Test Report: Hyperkit_macOS 19312

5c64880be4606435f09036ce2ec4c937eccc350b:2024-07-28:35539

Failed tests (25/227)

TestOffline (195.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-461000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-461000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.851920666s)

-- stdout --
	* [offline-docker-461000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-461000" primary control-plane node in "offline-docker-461000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-461000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0728 18:59:53.439233    5252 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:59:53.439519    5252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:59:53.439524    5252 out.go:304] Setting ErrFile to fd 2...
	I0728 18:59:53.439528    5252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:59:53.439721    5252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:59:53.441463    5252 out.go:298] Setting JSON to false
	I0728 18:59:53.467273    5252 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5364,"bootTime":1722213029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:59:53.467387    5252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:59:53.525067    5252 out.go:177] * [offline-docker-461000] minikube v1.33.1 on Darwin 14.5
	I0728 18:59:53.566412    5252 notify.go:220] Checking for updates...
	I0728 18:59:53.592020    5252 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:59:53.653050    5252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:59:53.705191    5252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:59:53.764231    5252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:59:53.785073    5252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:59:53.806125    5252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:59:53.832736    5252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:59:53.876497    5252 out.go:177] * Using the hyperkit driver based on user configuration
	I0728 18:59:53.919369    5252 start.go:297] selected driver: hyperkit
	I0728 18:59:53.919392    5252 start.go:901] validating driver "hyperkit" against <nil>
	I0728 18:59:53.919413    5252 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:59:53.923664    5252 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:59:53.923778    5252 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:59:53.931868    5252 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:59:53.935435    5252 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:59:53.935470    5252 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:59:53.935506    5252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:59:53.935721    5252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:59:53.935752    5252 cni.go:84] Creating CNI manager for ""
	I0728 18:59:53.935771    5252 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:59:53.935775    5252 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:59:53.935843    5252 start.go:340] cluster config:
	{Name:offline-docker-461000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:59:53.935919    5252 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:59:53.981991    5252 out.go:177] * Starting "offline-docker-461000" primary control-plane node in "offline-docker-461000" cluster
	I0728 18:59:54.003141    5252 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:59:54.003189    5252 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:59:54.003206    5252 cache.go:56] Caching tarball of preloaded images
	I0728 18:59:54.003338    5252 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:59:54.003350    5252 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:59:54.003725    5252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/offline-docker-461000/config.json ...
	I0728 18:59:54.003753    5252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/offline-docker-461000/config.json: {Name:mka8cae696cf61fe4adf046170e83f1c0c0bdaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:59:54.004092    5252 start.go:360] acquireMachinesLock for offline-docker-461000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:59:54.004151    5252 start.go:364] duration metric: took 44.199µs to acquireMachinesLock for "offline-docker-461000"
	I0728 18:59:54.004174    5252 start.go:93] Provisioning new machine with config: &{Name:offline-docker-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:59:54.004222    5252 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 18:59:54.067316    5252 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:59:54.067617    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:59:54.067686    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:59:54.077420    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53432
	I0728 18:59:54.077794    5252 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:59:54.078192    5252 main.go:141] libmachine: Using API Version  1
	I0728 18:59:54.078208    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:59:54.078417    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:59:54.078538    5252 main.go:141] libmachine: (offline-docker-461000) Calling .GetMachineName
	I0728 18:59:54.078653    5252 main.go:141] libmachine: (offline-docker-461000) Calling .DriverName
	I0728 18:59:54.078780    5252 start.go:159] libmachine.API.Create for "offline-docker-461000" (driver="hyperkit")
	I0728 18:59:54.078800    5252 client.go:168] LocalClient.Create starting
	I0728 18:59:54.078832    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 18:59:54.078891    5252 main.go:141] libmachine: Decoding PEM data...
	I0728 18:59:54.078905    5252 main.go:141] libmachine: Parsing certificate...
	I0728 18:59:54.078972    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 18:59:54.079011    5252 main.go:141] libmachine: Decoding PEM data...
	I0728 18:59:54.079024    5252 main.go:141] libmachine: Parsing certificate...
	I0728 18:59:54.079039    5252 main.go:141] libmachine: Running pre-create checks...
	I0728 18:59:54.079049    5252 main.go:141] libmachine: (offline-docker-461000) Calling .PreCreateCheck
	I0728 18:59:54.079136    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:59:54.079333    5252 main.go:141] libmachine: (offline-docker-461000) Calling .GetConfigRaw
	I0728 18:59:54.109388    5252 main.go:141] libmachine: Creating machine...
	I0728 18:59:54.109409    5252 main.go:141] libmachine: (offline-docker-461000) Calling .Create
	I0728 18:59:54.109646    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:59:54.110038    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 18:59:54.109620    5274 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:59:54.110112    5252 main.go:141] libmachine: (offline-docker-461000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 18:59:54.511798    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 18:59:54.511684    5274 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/id_rsa...
	I0728 18:59:54.649346    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 18:59:54.649259    5274 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/offline-docker-461000.rawdisk...
	I0728 18:59:54.649371    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Writing magic tar header
	I0728 18:59:54.649384    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Writing SSH key tar header
	I0728 18:59:54.649697    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 18:59:54.649659    5274 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000 ...
	I0728 18:59:55.127261    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:59:55.127323    5252 main.go:141] libmachine: (offline-docker-461000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/hyperkit.pid
	I0728 18:59:55.127343    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Using UUID aa0e75d1-0db8-4df6-8d5d-d536acbd0223
	I0728 18:59:55.291669    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Generated MAC 3a:a2:25:33:1f:8c
	I0728 18:59:55.291686    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-461000
	I0728 18:59:55.291739    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"aa0e75d1-0db8-4df6-8d5d-d536acbd0223", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:59:55.291772    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"aa0e75d1-0db8-4df6-8d5d-d536acbd0223", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:59:55.291859    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "aa0e75d1-0db8-4df6-8d5d-d536acbd0223", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/offline-docker-461000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-461000"}
	I0728 18:59:55.291918    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U aa0e75d1-0db8-4df6-8d5d-d536acbd0223 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/offline-docker-461000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-461000"
	I0728 18:59:55.291936    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:59:55.294974    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 DEBUG: hyperkit: Pid is 5298
	I0728 18:59:55.295413    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 0
	I0728 18:59:55.295428    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:59:55.295537    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 18:59:55.296539    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 18:59:55.296621    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 18:59:55.296636    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 18:59:55.296653    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 18:59:55.296668    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 18:59:55.296681    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 18:59:55.296694    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 18:59:55.296708    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 18:59:55.296717    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:59:55.296725    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:59:55.296730    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:59:55.296749    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:59:55.296769    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:59:55.296778    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:59:55.296785    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:59:55.296792    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:59:55.296800    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:59:55.296808    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:59:55.296816    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:59:55.302610    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:59:55.433341    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:59:55.433974    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:59:55.433994    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:59:55.434003    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:59:55.434012    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:59:55.811799    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:59:55.811816    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:59:55.926763    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:59:55.926784    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:59:55.926798    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:59:55.926804    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:59:55.927610    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:59:55.927622    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 18:59:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:59:57.297281    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 1
	I0728 18:59:57.297298    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:59:57.297344    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 18:59:57.298118    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 18:59:57.298131    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 18:59:57.298140    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 18:59:57.298147    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 18:59:57.298156    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 18:59:57.298164    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 18:59:57.298171    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 18:59:57.298178    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 18:59:57.298190    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:59:57.298198    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:59:57.298206    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:59:57.298211    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:59:57.298224    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:59:57.298239    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:59:57.298247    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:59:57.298257    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:59:57.298275    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:59:57.298293    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:59:57.298307    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:59:59.299300    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 2
	I0728 18:59:59.299317    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:59:59.299378    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 18:59:59.300211    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 18:59:59.300257    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 18:59:59.300268    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 18:59:59.300279    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 18:59:59.300286    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 18:59:59.300293    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 18:59:59.300301    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 18:59:59.300315    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 18:59:59.300328    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:59:59.300335    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:59:59.300344    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:59:59.300365    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:59:59.300382    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:59:59.300396    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:59:59.300408    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:59:59.300417    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:59:59.300425    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:59:59.300432    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:59:59.300440    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:01.301109    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 3
	I0728 19:00:01.301131    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:01.301217    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:01.302215    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:01.302266    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:01.302277    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:01.302284    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:01.302290    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:01.302314    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:01.302330    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:01.302339    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:01.302353    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:01.302378    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:01.302389    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:01.302403    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:01.302412    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:01.302419    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:01.302428    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:01.302437    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:01.302445    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:01.302453    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:01.302469    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:01.333105    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:00:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 19:00:01.333266    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:00:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 19:00:01.333276    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:00:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 19:00:01.353507    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:00:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 19:00:03.303836    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 4
	I0728 19:00:03.303853    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:03.303957    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:03.304714    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:03.304775    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:03.304787    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:03.304796    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:03.304803    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:03.304824    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:03.304833    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:03.304846    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:03.304861    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:03.304880    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:03.304888    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:03.304896    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:03.304905    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:03.304913    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:03.304920    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:03.304927    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:03.304934    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:03.304941    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:03.304949    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:05.305267    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 5
	I0728 19:00:05.305280    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:05.305320    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:05.306115    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:05.306163    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:05.306173    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:05.306185    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:05.306191    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:05.306198    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:05.306204    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:05.306226    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:05.306235    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:05.306244    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:05.306260    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:05.306275    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:05.306283    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:05.306291    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:05.306299    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:05.306306    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:05.306314    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:05.306321    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:05.306329    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:07.308317    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 6
	I0728 19:00:07.308334    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:07.308414    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:07.309182    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:07.309249    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:07.309261    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:07.309271    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:07.309281    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:07.309292    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:07.309300    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:07.309307    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:07.309331    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:07.309350    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:07.309361    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:07.309370    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:07.309386    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:07.309398    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:07.309415    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:07.309425    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:07.309432    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:07.309441    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:07.309452    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:09.310381    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 7
	I0728 19:00:09.310393    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:09.310511    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:09.311277    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:09.311335    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:09.311347    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:09.311412    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:09.311447    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:09.311455    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:09.311469    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:09.311479    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:09.311487    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:09.311494    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:09.311502    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:09.311509    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:09.311517    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:09.311525    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:09.311535    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:09.311542    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:09.311549    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:09.311556    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:09.311563    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:11.312348    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 8
	I0728 19:00:11.312371    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:11.312471    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:11.313339    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:11.313394    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:11.313404    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:11.313412    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:11.313418    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:11.313437    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:11.313451    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:11.313469    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:11.313482    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:11.313495    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:11.313507    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:11.313527    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:11.313547    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:11.313556    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:11.313566    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:11.313573    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:11.313589    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:11.313597    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:11.313605    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:13.313567    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 9
	I0728 19:00:13.313585    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:13.313626    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:13.314391    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:13.314429    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:13.314456    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:13.314473    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:13.314484    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:13.314493    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:13.314500    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:13.314520    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:13.314536    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:13.314547    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:13.314555    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:13.314564    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:13.314571    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:13.314579    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:13.314586    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:13.314593    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:13.314602    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:13.314610    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:13.314618    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	[Attempts 10-16 (I0728 19:00:15 through 19:00:27, every ~2s) repeat the identical scan: exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0, hyperkit pid 5298, 17 entries found in /var/db/dhcpd_leases each time, none matching 3a:a2:25:33:1f:8c]
	I0728 19:00:29.332782    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 17
	I0728 19:00:29.332800    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:29.332914    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:29.334008    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:29.334057    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:29.334070    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:29.334079    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:29.334088    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:29.334118    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:29.334130    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:29.334137    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:29.334143    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:29.334150    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:29.334158    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:29.334165    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:29.334171    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:29.334186    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:29.334198    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:29.334206    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:29.334217    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:29.334229    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:29.334243    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:31.334548    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 18
	I0728 19:00:31.334564    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:31.334716    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:31.335477    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:31.335527    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:31.335543    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:31.335562    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:31.335579    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:31.335587    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:31.335593    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:31.335614    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:31.335626    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:31.335634    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:31.335643    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:31.335651    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:31.335658    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:31.335666    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:31.335677    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:31.335690    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:31.335697    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:31.335706    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:31.335722    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:33.337726    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 19
	I0728 19:00:33.337745    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:33.337781    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:33.338581    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:33.338625    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:33.338637    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:33.338681    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:33.338695    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:33.338706    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:33.338712    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:33.338720    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:33.338729    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:33.338735    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:33.338742    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:33.338752    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:33.338758    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:33.338765    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:33.338773    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:33.338780    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:33.338787    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:33.338799    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:33.338809    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:35.340403    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 20
	I0728 19:00:35.340419    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:35.340472    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:35.341448    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:35.341501    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:35.341513    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:35.341521    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:35.341527    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:35.341533    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:35.341541    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:35.341555    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:35.341562    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:35.341569    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:35.341575    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:35.341593    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:35.341601    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:35.341609    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:35.341617    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:35.341623    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:35.341632    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:35.341639    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:35.341647    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:37.343272    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 21
	I0728 19:00:37.343288    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:37.343411    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:37.344248    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:37.344293    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:37.344320    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:37.344353    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:37.344359    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:37.344365    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:37.344372    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:37.344379    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:37.344391    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:37.344409    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:37.344422    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:37.344440    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:37.344448    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:37.344460    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:37.344468    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:37.344475    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:37.344481    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:37.344488    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:37.344494    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:39.345084    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 22
	I0728 19:00:39.345101    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:39.345183    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:39.346004    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:39.346056    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:39.346069    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:39.346085    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:39.346096    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:39.346104    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:39.346111    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:39.346118    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:39.346123    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:39.346141    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:39.346150    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:39.346163    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:39.346172    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:39.346193    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:39.346207    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:39.346216    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:39.346224    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:39.346232    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:39.346240    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:41.347042    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 23
	I0728 19:00:41.347062    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:41.347153    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:41.347913    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:41.347973    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:41.347984    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:41.347995    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:41.348004    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:41.348013    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:41.348024    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:41.348037    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:41.348045    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:41.348053    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:41.348060    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:41.348073    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:41.348084    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:41.348092    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:41.348100    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:41.348108    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:41.348115    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:41.348128    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:41.348138    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:43.349824    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 24
	I0728 19:00:43.349839    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:43.349942    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:43.350760    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:43.350787    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:43.350800    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:43.350809    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:43.350817    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:43.350825    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:43.350831    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:43.350839    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:43.350862    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:43.350873    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:43.350882    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:43.350889    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:43.350895    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:43.350902    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:43.350911    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:43.350919    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:43.350935    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:43.350943    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:43.350951    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:45.352218    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 25
	I0728 19:00:45.352233    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:45.352320    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:45.353083    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:45.353134    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:45.353142    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:45.353162    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:45.353174    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:45.353183    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:45.353191    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:45.353202    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:45.353212    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:45.353220    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:45.353228    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:45.353235    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:45.353255    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:45.353262    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:45.353269    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:45.353277    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:45.353289    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:45.353297    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:45.353305    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:47.353581    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 26
	I0728 19:00:47.353595    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:47.353669    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:47.354446    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:47.354460    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:47.354473    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:47.354493    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:47.354501    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:47.354508    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:47.354515    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:47.354522    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:47.354527    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:47.354539    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:47.354553    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:47.354560    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:47.354567    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:47.354573    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:47.354579    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:47.354585    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:47.354592    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:47.354598    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:47.354604    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:49.356607    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 27
	I0728 19:00:49.356621    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:49.356720    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:49.357490    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:49.357531    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:49.357544    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:49.357555    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:49.357563    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:49.357568    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:49.357575    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:49.357585    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:49.357600    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:49.357613    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:49.357623    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:49.357631    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:49.357639    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:49.357656    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:49.357670    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:49.357682    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:49.357690    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:49.357697    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:49.357711    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:51.358727    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 28
	I0728 19:00:51.358742    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:51.358844    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:51.359622    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:51.359677    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:51.359693    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:51.359704    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:51.359724    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:51.359739    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:51.359753    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:51.359762    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:51.359778    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:51.359787    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:51.359794    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:51.359803    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:51.359814    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:51.359825    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:51.359832    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:51.359840    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:51.359847    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:51.359855    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:51.359864    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:53.360592    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 29
	I0728 19:00:53.360608    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:53.360808    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:53.361583    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for 3a:a2:25:33:1f:8c in /var/db/dhcpd_leases ...
	I0728 19:00:53.361639    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:53.361652    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:53.361660    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:53.361668    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:53.361679    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:53.361685    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:53.361691    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:53.361698    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:53.361707    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:53.361716    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:53.361724    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:53.361733    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:53.361759    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:53.361772    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:53.361781    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:53.361790    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:53.361799    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:53.361807    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:55.362426    5252 client.go:171] duration metric: took 1m1.284056056s to LocalClient.Create
	I0728 19:00:57.362946    5252 start.go:128] duration metric: took 1m3.359157421s to createHost
	I0728 19:00:57.362961    5252 start.go:83] releasing machines lock for "offline-docker-461000", held for 1m3.359261567s
	W0728 19:00:57.362978    5252 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:a2:25:33:1f:8c
	I0728 19:00:57.363388    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:00:57.363429    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:00:57.372582    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53470
	I0728 19:00:57.372998    5252 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:00:57.373405    5252 main.go:141] libmachine: Using API Version  1
	I0728 19:00:57.373429    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:00:57.373701    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:00:57.374122    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:00:57.374143    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:00:57.382804    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53472
	I0728 19:00:57.383274    5252 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:00:57.383662    5252 main.go:141] libmachine: Using API Version  1
	I0728 19:00:57.383676    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:00:57.383889    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:00:57.384011    5252 main.go:141] libmachine: (offline-docker-461000) Calling .GetState
	I0728 19:00:57.384095    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:57.384162    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:57.385097    5252 main.go:141] libmachine: (offline-docker-461000) Calling .DriverName
	I0728 19:00:57.426116    5252 out.go:177] * Deleting "offline-docker-461000" in hyperkit ...
	I0728 19:00:57.447325    5252 main.go:141] libmachine: (offline-docker-461000) Calling .Remove
	I0728 19:00:57.447455    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:57.447464    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:57.447533    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:57.448467    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:57.448520    5252 main.go:141] libmachine: (offline-docker-461000) DBG | waiting for graceful shutdown
	I0728 19:00:58.450136    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:58.450312    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:58.451229    5252 main.go:141] libmachine: (offline-docker-461000) DBG | waiting for graceful shutdown
	I0728 19:00:59.452288    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:59.452350    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:00:59.453984    5252 main.go:141] libmachine: (offline-docker-461000) DBG | waiting for graceful shutdown
	I0728 19:01:00.455417    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:00.455544    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:01:00.456110    5252 main.go:141] libmachine: (offline-docker-461000) DBG | waiting for graceful shutdown
	I0728 19:01:01.456237    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:01.456267    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:01:01.456856    5252 main.go:141] libmachine: (offline-docker-461000) DBG | waiting for graceful shutdown
	I0728 19:01:02.458742    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:02.458832    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5298
	I0728 19:01:02.459946    5252 main.go:141] libmachine: (offline-docker-461000) DBG | sending sigkill
	I0728 19:01:02.459955    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0728 19:01:02.472023    5252 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:a2:25:33:1f:8c
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:a2:25:33:1f:8c
	I0728 19:01:02.472042    5252 start.go:729] Will try again in 5 seconds ...
	I0728 19:01:02.481807    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:01:02 WARN : hyperkit: failed to read stdout: EOF
	I0728 19:01:02.481863    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:01:02 WARN : hyperkit: failed to read stderr: EOF
	I0728 19:01:07.472480    5252 start.go:360] acquireMachinesLock for offline-docker-461000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:02:00.214805    5252 start.go:364] duration metric: took 52.742667644s to acquireMachinesLock for "offline-docker-461000"
	I0728 19:02:00.214841    5252 start.go:93] Provisioning new machine with config: &{Name:offline-docker-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:02:00.214895    5252 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:02:00.236261    5252 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:02:00.236356    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:02:00.236386    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:02:00.244751    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53483
	I0728 19:02:00.245083    5252 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:02:00.245467    5252 main.go:141] libmachine: Using API Version  1
	I0728 19:02:00.245487    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:02:00.245699    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:02:00.245816    5252 main.go:141] libmachine: (offline-docker-461000) Calling .GetMachineName
	I0728 19:02:00.245916    5252 main.go:141] libmachine: (offline-docker-461000) Calling .DriverName
	I0728 19:02:00.246050    5252 start.go:159] libmachine.API.Create for "offline-docker-461000" (driver="hyperkit")
	I0728 19:02:00.246073    5252 client.go:168] LocalClient.Create starting
	I0728 19:02:00.246102    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:02:00.246157    5252 main.go:141] libmachine: Decoding PEM data...
	I0728 19:02:00.246167    5252 main.go:141] libmachine: Parsing certificate...
	I0728 19:02:00.246205    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:02:00.246246    5252 main.go:141] libmachine: Decoding PEM data...
	I0728 19:02:00.246258    5252 main.go:141] libmachine: Parsing certificate...
	I0728 19:02:00.246269    5252 main.go:141] libmachine: Running pre-create checks...
	I0728 19:02:00.246273    5252 main.go:141] libmachine: (offline-docker-461000) Calling .PreCreateCheck
	I0728 19:02:00.246362    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.246391    5252 main.go:141] libmachine: (offline-docker-461000) Calling .GetConfigRaw
	I0728 19:02:00.298114    5252 main.go:141] libmachine: Creating machine...
	I0728 19:02:00.298123    5252 main.go:141] libmachine: (offline-docker-461000) Calling .Create
	I0728 19:02:00.298212    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.298363    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 19:02:00.298202    5706 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:02:00.298394    5252 main.go:141] libmachine: (offline-docker-461000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:02:00.498290    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 19:02:00.498190    5706 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/id_rsa...
	I0728 19:02:00.563351    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 19:02:00.563284    5706 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/offline-docker-461000.rawdisk...
	I0728 19:02:00.563361    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Writing magic tar header
	I0728 19:02:00.563428    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Writing SSH key tar header
	I0728 19:02:00.563958    5252 main.go:141] libmachine: (offline-docker-461000) DBG | I0728 19:02:00.563911    5706 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000 ...
	I0728 19:02:00.937666    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.937686    5252 main.go:141] libmachine: (offline-docker-461000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/hyperkit.pid
	I0728 19:02:00.937729    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Using UUID db6c733e-7507-4568-9751-4f7a56ba90a2
	I0728 19:02:00.962608    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Generated MAC ca:6f:ae:7c:ea:36
	I0728 19:02:00.962630    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-461000
	I0728 19:02:00.962687    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"db6c733e-7507-4568-9751-4f7a56ba90a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:02:00.962718    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"db6c733e-7507-4568-9751-4f7a56ba90a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:02:00.962766    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "db6c733e-7507-4568-9751-4f7a56ba90a2", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/offline-docker-461000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-461000"}
	I0728 19:02:00.962797    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U db6c733e-7507-4568-9751-4f7a56ba90a2 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/offline-docker-461000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-461000"
	I0728 19:02:00.962818    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:02:00.965825    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 DEBUG: hyperkit: Pid is 5707
	I0728 19:02:00.966915    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 0
	I0728 19:02:00.966929    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.966995    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:00.967959    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:00.968007    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:00.968018    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:00.968027    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:00.968038    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:00.968045    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:00.968054    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:00.968064    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:00.968073    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:00.968105    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:00.968128    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:00.968137    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:00.968143    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:00.968155    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:00.968161    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:00.968167    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:00.968183    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:00.968199    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:00.968214    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:00.973785    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:02:00.981944    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/offline-docker-461000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:02:00.982826    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:02:00.982840    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:02:00.982850    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:02:00.982862    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:02:01.360212    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:02:01.360226    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:02:01.475222    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:02:01.475249    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:02:01.475263    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:02:01.475278    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:02:01.476080    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:02:01.476092    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:02:02.968688    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 1
	I0728 19:02:02.968703    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:02.968801    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:02.969588    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:02.969643    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:02.969654    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:02.969664    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:02.969674    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:02.969686    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:02.969693    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:02.969703    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:02.969724    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:02.969734    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:02.969742    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:02.969749    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:02.969757    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:02.969764    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:02.969771    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:02.969785    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:02.969801    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:02.969815    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:02.969830    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:04.971818    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 2
	I0728 19:02:04.971834    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:04.971866    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:04.972720    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:04.972768    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:04.972781    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:04.972795    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:04.972811    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:04.972820    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:04.972830    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:04.972847    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:04.972859    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:04.972874    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:04.972888    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:04.972897    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:04.972903    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:04.972918    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:04.972932    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:04.972940    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:04.972948    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:04.972958    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:04.972968    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:06.899572    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:06 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0728 19:02:06.899698    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:06 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0728 19:02:06.899707    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:06 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0728 19:02:06.919551    5252 main.go:141] libmachine: (offline-docker-461000) DBG | 2024/07/28 19:02:06 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0728 19:02:06.974127    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 3
	I0728 19:02:06.974156    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:06.974344    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:06.975774    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:06.975883    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:06.975905    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:06.975923    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:06.975936    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:06.975981    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:06.976007    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:06.976023    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:06.976039    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:06.976054    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:06.976071    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:06.976101    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:06.976112    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:06.976144    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:06.976162    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:06.976189    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:06.976206    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:06.976217    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:06.976228    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:08.977069    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 4
	I0728 19:02:08.977084    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:08.977201    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:08.977996    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:08.978043    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:08.978062    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:08.978080    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:08.978087    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:08.978093    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:08.978099    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:08.978116    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:08.978128    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:08.978139    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:08.978149    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:08.978156    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:08.978172    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:08.978179    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:08.978187    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:08.978194    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:08.978202    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:08.978212    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:08.978219    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:10.978839    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 5
	I0728 19:02:10.978850    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:10.978918    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:10.979699    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:10.979748    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:10.979763    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:10.979776    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:10.979785    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:10.979806    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:10.979820    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:10.979827    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:10.979836    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:10.979843    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:10.979861    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:10.979874    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:10.979893    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:10.979907    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:10.979917    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:10.979925    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:10.979934    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:10.979950    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:10.979963    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:12.979936    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 6
	I0728 19:02:12.979949    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:12.980000    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:12.980915    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:12.980955    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:14.982080    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 7
	I0728 19:02:14.982097    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:14.982192    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:14.982970    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:14.983023    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:16.984348    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 8
	I0728 19:02:16.984364    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:16.984502    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:16.985264    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:16.985322    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:18.986912    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 9
	I0728 19:02:18.986925    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:18.986991    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:18.987815    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:18.987863    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:20.989243    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 10
	I0728 19:02:20.989260    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:20.989324    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:20.990073    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:20.990118    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:22.991713    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 11
	I0728 19:02:22.991731    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:22.991852    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:22.992637    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:22.992670    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:24.993329    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 12
	I0728 19:02:24.993346    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:24.993379    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:24.994311    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:24.994365    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp lease entries identical to Attempt 5 omitted: 192.169.0.2 through 192.169.0.18; ca:6f:ae:7c:ea:36 not present)
	I0728 19:02:26.996534    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 13
	I0728 19:02:26.996552    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:26.996649    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:26.997463    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:26.997477    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:26.997483    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:26.997491    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:26.997497    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:26.997511    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:26.997518    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:26.997526    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:26.997536    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:26.997544    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:26.997552    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:26.997559    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:26.997594    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:26.997602    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:26.997610    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:26.997617    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:26.997629    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:26.997639    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:26.997653    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:28.998962    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 14
	I0728 19:02:28.998977    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:28.998986    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:28.999796    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:28.999837    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:28.999853    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:28.999866    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:28.999874    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:28.999881    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:28.999902    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:28.999912    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:28.999921    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:28.999972    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:28.999996    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:29.000004    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:29.000010    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:29.000017    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:29.000026    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:29.000032    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:29.000040    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:29.000049    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:29.000057    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:31.001661    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 15
	I0728 19:02:31.001677    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:31.001761    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:31.002769    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:31.002826    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:31.002838    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:31.002846    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:31.002856    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:31.002881    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:31.002902    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:31.002914    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:31.002929    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:31.002942    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:31.002961    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:31.002972    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:31.002983    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:31.002991    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:31.003006    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:31.003015    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:31.003022    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:31.003029    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:31.003043    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:33.003868    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 16
	I0728 19:02:33.003883    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:33.003946    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:33.004695    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:33.004749    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:33.004763    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:33.004773    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:33.004810    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:33.004821    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:33.004828    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:33.004842    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:33.004849    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:33.004856    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:33.004865    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:33.004872    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:33.004878    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:33.004886    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:33.004896    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:33.004904    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:33.004920    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:33.004928    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:33.004937    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:35.006043    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 17
	I0728 19:02:35.006061    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:35.006133    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:35.006889    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:35.006942    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:35.006955    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:35.006965    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:35.006974    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:35.006982    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:35.006989    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:35.006996    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:35.007001    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:35.007015    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:35.007029    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:35.007038    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:35.007045    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:35.007056    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:35.007064    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:35.007080    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:35.007092    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:35.007105    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:35.007115    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:37.007338    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 18
	I0728 19:02:37.007354    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:37.007455    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:37.008257    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:37.008293    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:37.008302    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:37.008324    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:37.008331    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:37.008337    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:37.008346    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:37.008354    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:37.008361    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:37.008369    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:37.008379    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:37.008386    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:37.008392    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:37.008399    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:37.008406    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:37.008414    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:37.008421    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:37.008428    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:37.008436    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:39.009933    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 19
	I0728 19:02:39.009947    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:39.010029    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:39.010818    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:39.010855    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:39.010868    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:39.010879    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:39.010886    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:39.010894    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:39.010903    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:39.010910    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:39.010917    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:39.010922    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:39.010938    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:39.010945    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:39.010952    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:39.010980    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:39.010990    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:39.010997    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:39.011003    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:39.011018    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:39.011031    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:41.012296    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 20
	I0728 19:02:41.012307    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:41.012408    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:41.013222    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:41.013258    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:41.013269    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:41.013280    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:41.013289    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:41.013310    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:41.013326    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:41.013340    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:41.013361    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:41.013370    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:41.013378    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:41.013387    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:41.013394    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:41.013403    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:41.013411    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:41.013418    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:41.013426    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:41.013433    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:41.013441    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:43.015011    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 21
	I0728 19:02:43.015023    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:43.015161    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:43.015920    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:43.015977    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:43.015991    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:43.016016    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:43.016025    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:43.016033    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:43.016040    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:43.016054    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:43.016067    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:43.016076    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:43.016090    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:43.016108    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:43.016120    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:43.016129    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:43.016139    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:43.016157    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:43.016170    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:43.016186    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:43.016198    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:45.017383    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 22
	I0728 19:02:45.017396    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:45.017540    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:45.018398    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:45.018453    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:45.018471    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:45.018480    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:45.018489    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:45.018498    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:45.018504    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:45.018511    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:45.018517    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:45.018524    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:45.018530    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:45.018537    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:45.018545    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:45.018560    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:45.018613    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:45.018622    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:45.018630    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:45.018638    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:45.018644    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:47.019279    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 23
	I0728 19:02:47.019293    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:47.019350    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:47.020251    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:47.020297    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:47.020313    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:47.020350    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:47.020361    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:47.020370    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:47.020380    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:47.020394    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:47.020405    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:47.020427    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:47.020439    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:47.020447    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:47.020458    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:47.020465    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:47.020473    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:47.020482    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:47.020489    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:47.020498    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:47.020509    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:49.022485    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 24
	I0728 19:02:49.022510    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:49.022545    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:49.023320    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:49.023353    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:49.023363    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:49.023371    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:49.023378    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:49.023391    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:49.023401    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:49.023410    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:49.023415    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:49.023426    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:49.023435    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:49.023442    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:49.023450    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:49.023457    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:49.023465    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:49.023471    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:49.023480    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:49.023489    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:49.023497    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:51.025506    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 25
	I0728 19:02:51.025520    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:51.025564    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:51.026407    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:51.026447    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:51.026457    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:51.026466    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:51.026473    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:51.026496    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:51.026507    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:51.026515    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:51.026521    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:51.026527    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:51.026535    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:51.026543    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:51.026551    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:51.026558    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:51.026565    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:51.026583    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:51.026592    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:51.026598    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:51.026605    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:53.028597    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 26
	I0728 19:02:53.028613    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:53.028777    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:53.029569    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:53.029631    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:53.029643    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:53.029657    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:53.029664    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:53.029670    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:53.029677    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:53.029684    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:53.029692    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:53.029699    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:53.029705    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:53.029724    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:53.029735    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:53.029742    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:53.029751    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:53.029758    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:53.029767    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:53.029783    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:53.029795    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:55.031788    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 27
	I0728 19:02:55.031806    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:55.031886    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:55.032671    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:55.032721    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:55.032739    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:55.032764    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:55.032776    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:55.032786    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:55.032795    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:55.032802    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:55.032811    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:55.032818    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:55.032835    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:55.032843    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:55.032849    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:55.032856    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:55.032864    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:55.032872    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:55.032879    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:55.032886    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:55.032894    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:57.032898    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 28
	I0728 19:02:57.032914    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:57.033011    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:57.033781    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:57.033825    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:57.033835    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:57.033847    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:57.033861    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:57.033874    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:57.033889    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:57.033896    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:57.033904    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:57.033920    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:57.033933    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:57.033958    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:57.033970    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:57.033977    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:57.033985    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:57.033998    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:57.034011    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:57.034028    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:57.034039    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:02:59.035377    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Attempt 29
	I0728 19:02:59.035389    5252 main.go:141] libmachine: (offline-docker-461000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:59.035521    5252 main.go:141] libmachine: (offline-docker-461000) DBG | hyperkit pid from json: 5707
	I0728 19:02:59.036301    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Searching for ca:6f:ae:7c:ea:36 in /var/db/dhcpd_leases ...
	I0728 19:02:59.036335    5252 main.go:141] libmachine: (offline-docker-461000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:02:59.036343    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:02:59.036363    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:02:59.036373    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:02:59.036381    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:02:59.036388    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:02:59.036395    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:02:59.036405    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:02:59.036412    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:02:59.036420    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:02:59.036441    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:02:59.036459    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:02:59.036467    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:02:59.036475    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:02:59.036486    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:02:59.036497    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:02:59.036504    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:02:59.036512    5252 main.go:141] libmachine: (offline-docker-461000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:01.037241    5252 client.go:171] duration metric: took 1m0.79159872s to LocalClient.Create
	I0728 19:03:03.037958    5252 start.go:128] duration metric: took 1m2.823507656s to createHost
	I0728 19:03:03.037971    5252 start.go:83] releasing machines lock for "offline-docker-461000", held for 1m2.823597957s
	W0728 19:03:03.038083    5252 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-461000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:6f:ae:7c:ea:36
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-461000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:6f:ae:7c:ea:36
	I0728 19:03:03.100250    5252 out.go:177] 
	W0728 19:03:03.121325    5252 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:6f:ae:7c:ea:36
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:6f:ae:7c:ea:36
	W0728 19:03:03.121357    5252 out.go:239] * 
	* 
	W0728 19:03:03.122082    5252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 19:03:03.184247    5252 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-461000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-28 19:03:03.290489 -0700 PDT m=+4625.023223520
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-461000 -n offline-docker-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-461000 -n offline-docker-461000: exit status 7 (79.331899ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 19:03:03.367934    5714 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:03:03.367958    5714 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-461000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-461000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-461000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-461000: (5.257792154s)
--- FAIL: TestOffline (195.25s)
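
The failure above comes from the hyperkit driver repeatedly scanning `/var/db/dhcpd_leases` for the new VM's MAC address (`ca:6f:ae:7c:ea:36`) and never finding a matching entry. The lookup it performs can be sketched roughly as below; the entry layout is taken from the `dhcp entry:` lines in the log, while the function and variable names are illustrative, not minikube's actual code.

```go
package main

import (
	"fmt"
	"regexp"
)

// sampleLeases mimics two of the "dhcp entry" records the driver logs while
// scanning /var/db/dhcpd_leases (field layout copied from the log above).
var sampleLeases = `{Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
{Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}`

// entryRe captures the IP and hardware address out of each lease record.
var entryRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)

// leaseIP returns the IP recorded for hwAddr, if any entry matches.
func leaseIP(leases, hwAddr string) (string, bool) {
	for _, m := range entryRe.FindAllStringSubmatch(leases, -1) {
		if m[2] == hwAddr {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	// The MAC the failed run was waiting for never appears, which is the
	// condition the driver retries on ("Attempt 29" above) until it gives
	// up with "IP address never found in dhcp leases file".
	if ip, ok := leaseIP(sampleLeases, "ca:6f:ae:7c:ea:36"); ok {
		fmt.Println("found:", ip)
	} else {
		fmt.Println("could not find an IP address for ca:6f:ae:7c:ea:36")
	}
}
```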

                                                
                                    
TestCertOptions (251.7s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-760000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0728 19:09:37.107882    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:10:04.799076    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:10:50.075698    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:11:00.994316    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-760000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.02964698s)

                                                
                                                
-- stdout --
	* [cert-options-760000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-760000" primary control-plane node in "cert-options-760000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-760000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 66:96:3c:ff:17:80
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-760000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:70:9d:67:81:a6
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:70:9d:67:81:a6
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-760000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-760000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-760000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (159.070337ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-760000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-760000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-760000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-760000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-760000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (160.458519ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-760000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-760000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-760000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-28 19:12:30.032176 -0700 PDT m=+5191.773775471
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-760000 -n cert-options-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-760000 -n cert-options-760000: exit status 7 (77.293523ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 19:12:30.107813    5890 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:12:30.107836    5890 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-760000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-760000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-760000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-760000: (5.232643529s)
--- FAIL: TestCertOptions (251.70s)
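
The SAN assertions at cert_options_test.go:69 could not run because the VM never came up, but the check itself is a standard certificate SAN inspection. A minimal sketch of that kind of check, using Go's `crypto/x509` against a throwaway self-signed cert carrying the SANs the test expected (`hasSAN` and `selfSigned` are illustrative names, not minikube helpers):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// hasSAN reports whether cert lists value among its IP or DNS SANs.
func hasSAN(cert *x509.Certificate, value string) bool {
	if ip := net.ParseIP(value); ip != nil {
		for _, got := range cert.IPAddresses {
			if got.Equal(ip) {
				return true
			}
		}
		return false
	}
	for _, name := range cert.DNSNames {
		if name == value {
			return true
		}
	}
	return false
}

// selfSigned builds a throwaway cert with the SANs from the test's flags.
// Errors are ignored for brevity; a real caller would check them.
func selfSigned() *x509.Certificate {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		DNSNames:     []string{"localhost", "www.google.com"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.15.15")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)
	return cert
}

func main() {
	cert := selfSigned()
	for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
		fmt.Printf("%s in SAN: %v\n", want, hasSAN(cert, want))
	}
}
```

The real test shells into the VM and runs `openssl x509 -text -noout` instead, but the pass/fail criterion is the same membership check.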

                                                
                                    
TestCertExpiration (1759.74s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-672000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0728 19:07:20.959566    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-672000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.381986175s)

                                                
                                                
-- stdout --
	* [cert-expiration-672000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-672000" primary control-plane node in "cert-expiration-672000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-672000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:ec:ca:9a:c7:1c
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-672000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:22:f6:b9:5d:f0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:22:f6:b9:5d:f0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-672000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-672000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0728 19:14:37.099914    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-672000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (22m8.00389046s)

                                                
                                                
-- stdout --
	* [cert-expiration-672000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-672000" primary control-plane node in "cert-expiration-672000" cluster
	* Updating the running hyperkit "cert-expiration-672000" VM ...
	* Updating the running hyperkit "cert-expiration-672000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-672000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-672000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-672000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-672000" primary control-plane node in "cert-expiration-672000" cluster
	* Updating the running hyperkit "cert-expiration-672000" VM ...
	* Updating the running hyperkit "cert-expiration-672000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-672000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-28 19:36:34.76436 -0700 PDT m=+6636.500042681
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-672000 -n cert-expiration-672000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-672000 -n cert-expiration-672000: exit status 7 (79.694629ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 19:36:34.842304    7242 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:36:34.842328    7242 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-672000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-672000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-672000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-672000: (5.267810362s)
--- FAIL: TestCertExpiration (1759.74s)

                                                
                                    
TestDockerFlags (251.88s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-771000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0728 19:04:37.109807    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.116196    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.127924    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.148025    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.189397    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.270682    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.431605    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:37.753672    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:38.394045    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:39.674384    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:42.235318    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:47.355927    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:04:57.596118    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:05:18.077474    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:05:50.079218    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:05:59.039301    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:06:00.996477    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-771000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.139298795s)

-- stdout --
	* [docker-flags-771000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-771000" primary control-plane node in "docker-flags-771000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-771000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0728 19:04:11.820426    5754 out.go:291] Setting OutFile to fd 1 ...
	I0728 19:04:11.821115    5754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:04:11.821124    5754 out.go:304] Setting ErrFile to fd 2...
	I0728 19:04:11.821131    5754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:04:11.821710    5754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 19:04:11.823331    5754 out.go:298] Setting JSON to false
	I0728 19:04:11.846193    5754 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5622,"bootTime":1722213029,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 19:04:11.846289    5754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 19:04:11.869142    5754 out.go:177] * [docker-flags-771000] minikube v1.33.1 on Darwin 14.5
	I0728 19:04:11.910151    5754 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 19:04:11.910192    5754 notify.go:220] Checking for updates...
	I0728 19:04:11.952182    5754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 19:04:11.973055    5754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 19:04:11.993273    5754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 19:04:12.036028    5754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:04:12.057225    5754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 19:04:12.078728    5754 config.go:182] Loaded profile config "force-systemd-flag-925000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 19:04:12.078834    5754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 19:04:12.107060    5754 out.go:177] * Using the hyperkit driver based on user configuration
	I0728 19:04:12.148219    5754 start.go:297] selected driver: hyperkit
	I0728 19:04:12.148233    5754 start.go:901] validating driver "hyperkit" against <nil>
	I0728 19:04:12.148243    5754 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 19:04:12.151243    5754 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:04:12.151368    5754 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 19:04:12.159936    5754 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 19:04:12.163883    5754 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:04:12.163907    5754 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 19:04:12.163936    5754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 19:04:12.164138    5754 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0728 19:04:12.164192    5754 cni.go:84] Creating CNI manager for ""
	I0728 19:04:12.164210    5754 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 19:04:12.164215    5754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 19:04:12.164282    5754 start.go:340] cluster config:
	{Name:docker-flags-771000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 19:04:12.164362    5754 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:04:12.206169    5754 out.go:177] * Starting "docker-flags-771000" primary control-plane node in "docker-flags-771000" cluster
	I0728 19:04:12.227161    5754 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 19:04:12.227205    5754 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 19:04:12.227225    5754 cache.go:56] Caching tarball of preloaded images
	I0728 19:04:12.227342    5754 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 19:04:12.227351    5754 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 19:04:12.227428    5754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/docker-flags-771000/config.json ...
	I0728 19:04:12.227447    5754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/docker-flags-771000/config.json: {Name:mk8481b0bdb96a45c49fe162449101a26eac84e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 19:04:12.227773    5754 start.go:360] acquireMachinesLock for docker-flags-771000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:05:08.968967    5754 start.go:364] duration metric: took 56.741571117s to acquireMachinesLock for "docker-flags-771000"
	I0728 19:05:08.969027    5754 start.go:93] Provisioning new machine with config: &{Name:docker-flags-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:05:08.969097    5754 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:05:08.990442    5754 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:05:08.990590    5754 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:05:08.990645    5754 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:05:08.999113    5754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53517
	I0728 19:05:08.999555    5754 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:05:09.000028    5754 main.go:141] libmachine: Using API Version  1
	I0728 19:05:09.000044    5754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:05:09.000337    5754 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:05:09.000460    5754 main.go:141] libmachine: (docker-flags-771000) Calling .GetMachineName
	I0728 19:05:09.000560    5754 main.go:141] libmachine: (docker-flags-771000) Calling .DriverName
	I0728 19:05:09.000653    5754 start.go:159] libmachine.API.Create for "docker-flags-771000" (driver="hyperkit")
	I0728 19:05:09.000676    5754 client.go:168] LocalClient.Create starting
	I0728 19:05:09.000712    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:05:09.000769    5754 main.go:141] libmachine: Decoding PEM data...
	I0728 19:05:09.000786    5754 main.go:141] libmachine: Parsing certificate...
	I0728 19:05:09.000850    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:05:09.000890    5754 main.go:141] libmachine: Decoding PEM data...
	I0728 19:05:09.000901    5754 main.go:141] libmachine: Parsing certificate...
	I0728 19:05:09.000919    5754 main.go:141] libmachine: Running pre-create checks...
	I0728 19:05:09.000927    5754 main.go:141] libmachine: (docker-flags-771000) Calling .PreCreateCheck
	I0728 19:05:09.001056    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.001230    5754 main.go:141] libmachine: (docker-flags-771000) Calling .GetConfigRaw
	I0728 19:05:09.055162    5754 main.go:141] libmachine: Creating machine...
	I0728 19:05:09.055185    5754 main.go:141] libmachine: (docker-flags-771000) Calling .Create
	I0728 19:05:09.055292    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.055449    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:05:09.055293    5767 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:05:09.055548    5754 main.go:141] libmachine: (docker-flags-771000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:05:09.239278    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:05:09.239178    5767 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/id_rsa...
	I0728 19:05:09.270417    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:05:09.270352    5767 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/docker-flags-771000.rawdisk...
	I0728 19:05:09.270433    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Writing magic tar header
	I0728 19:05:09.270442    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Writing SSH key tar header
	I0728 19:05:09.270767    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:05:09.270698    5767 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000 ...
	I0728 19:05:09.643982    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.644007    5754 main.go:141] libmachine: (docker-flags-771000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/hyperkit.pid
	I0728 19:05:09.644028    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Using UUID e43604f8-044f-4498-9cd2-02fe0fa87e3f
	I0728 19:05:09.670265    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Generated MAC da:a3:8d:66:c9:4c
	I0728 19:05:09.670282    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-771000
	I0728 19:05:09.670314    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e43604f8-044f-4498-9cd2-02fe0fa87e3f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:05:09.670341    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e43604f8-044f-4498-9cd2-02fe0fa87e3f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:05:09.670403    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e43604f8-044f-4498-9cd2-02fe0fa87e3f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/docker-flags-771000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-771000"}
	I0728 19:05:09.670452    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e43604f8-044f-4498-9cd2-02fe0fa87e3f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/docker-flags-771000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-771000"
	I0728 19:05:09.670468    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:05:09.673514    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 DEBUG: hyperkit: Pid is 5768
	I0728 19:05:09.674098    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 0
	I0728 19:05:09.674113    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.674213    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:09.675475    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:09.675551    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:09.675563    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:09.675601    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:09.675617    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:09.675628    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:09.675638    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:09.675647    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:09.675657    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:09.675669    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:09.675688    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:09.675705    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:09.675723    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:09.675736    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:09.675750    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:09.675762    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:09.675776    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:09.675788    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:09.675800    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:09.681083    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:05:09.689189    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:05:09.690227    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:05:09.690248    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:05:09.690255    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:05:09.690261    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:05:10.066589    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:05:10.066601    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:05:10.181754    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:05:10.181770    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:05:10.181805    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:05:10.181824    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:05:10.182620    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:05:10.182638    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:05:11.676364    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 1
	I0728 19:05:11.676378    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:11.676470    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:11.677269    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:11.677294    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:11.677307    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:11.677317    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:11.677325    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:11.677333    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:11.677347    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:11.677358    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:11.677376    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:11.677388    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:11.677398    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:11.677406    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:11.677472    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:11.677499    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:11.677509    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:11.677516    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:11.677532    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:11.677546    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:11.677578    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:13.678940    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 2
	I0728 19:05:13.678960    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:13.678979    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:13.679762    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:13.679802    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:13.679815    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:13.679825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:13.679858    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:13.679880    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:13.679892    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:13.679900    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:13.679905    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:13.679926    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:13.679940    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:13.679952    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:13.679961    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:13.679967    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:13.679975    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:13.679983    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:13.679994    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:13.680001    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:13.680006    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:15.559870    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 19:05:15.560017    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 19:05:15.560028    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 19:05:15.580081    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:05:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 19:05:15.680970    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 3
	I0728 19:05:15.680991    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:15.681158    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:15.682215    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:15.682297    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:15.682317    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:15.682327    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:15.682346    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:15.682358    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:15.682366    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:15.682375    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:15.682386    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:15.682407    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:15.682424    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:15.682446    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:15.682464    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:15.682475    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:15.682498    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:15.682507    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:15.682518    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:15.682529    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:15.682552    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:17.683706    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 4
	I0728 19:05:17.683722    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:17.683788    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:17.684612    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:17.684684    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:17.684695    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:17.684707    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:17.684715    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:17.684722    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:17.684729    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:17.684744    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:17.684753    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:17.684767    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:17.684781    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:17.684788    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:17.684797    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:17.684807    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:17.684815    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:17.684822    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:17.684831    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:17.684838    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:17.684844    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:19.685478    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 5
	I0728 19:05:19.685491    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:19.685644    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:19.686642    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:19.686705    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:19.686730    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:19.686742    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:19.686750    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:19.686761    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:19.686771    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:19.686779    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:19.686792    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:19.686807    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:19.686816    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:19.686825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:19.686833    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:19.686842    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:19.686849    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:19.686857    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:19.686864    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:19.686872    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:19.686881    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:21.686853    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 6
	I0728 19:05:21.686905    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:21.686978    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:21.687764    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:21.687792    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:21.687805    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:21.687825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:21.687833    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:21.687849    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:21.687867    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:21.687880    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:21.687891    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:21.687899    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:21.687908    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:21.687914    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:21.687922    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:21.687931    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:21.687939    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:21.687955    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:21.687967    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:21.687983    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:21.688006    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:23.688537    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 7
	I0728 19:05:23.688553    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:23.688668    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:23.689456    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:23.689498    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:23.689508    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:23.689528    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:23.689541    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:23.689554    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:23.689561    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:23.689568    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:23.689575    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:23.689589    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:23.689601    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:23.689612    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:23.689619    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:23.689631    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:23.689647    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:23.689654    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:23.689662    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:23.689671    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:23.689679    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:25.690095    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 8
	I0728 19:05:25.690111    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:25.690168    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:25.690939    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:25.691196    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:25.691206    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:25.691214    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:25.691221    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:25.691232    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:25.691239    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:25.691247    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:25.691254    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:25.691282    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:25.691294    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:25.691302    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:25.691310    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:25.691317    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:25.691326    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:25.691340    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:25.691352    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:25.691359    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:25.691368    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:27.693337    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 9
	I0728 19:05:27.693351    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:27.693448    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:27.694476    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:27.694513    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:27.694521    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:27.694535    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:27.694540    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:27.694557    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:27.694569    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:27.694577    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:27.694586    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:27.694592    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:27.694601    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:27.694611    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:27.694620    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:27.694627    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:27.694633    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:27.694651    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:27.694663    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:27.694671    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:27.694679    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:29.695663    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 10
	I0728 19:05:29.695680    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:29.695823    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:29.696581    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:29.696627    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:29.696637    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:29.696655    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:29.696669    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:29.696702    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:29.696717    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:29.696725    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:29.696733    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:29.696751    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:29.696762    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:29.696771    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:29.696779    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:29.696786    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:29.696791    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:29.696806    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:29.696818    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:29.696825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:29.696833    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:31.698113    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 11
	I0728 19:05:31.698144    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:31.698229    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:31.699115    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:31.699166    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:31.699183    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:31.699193    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:31.699199    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:31.699225    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:31.699239    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:31.699247    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:31.699255    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:31.699270    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:31.699284    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:31.699293    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:31.699301    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:31.699313    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:31.699321    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:31.699330    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:31.699338    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:31.699345    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:31.699360    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:33.699756    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 12
	I0728 19:05:33.699772    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:33.699927    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:33.700884    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:33.700913    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:33.700921    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:33.700952    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:33.700966    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:33.700974    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:33.700986    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:33.700994    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:33.701004    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:33.701011    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:33.701019    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:33.701032    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:33.701042    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:33.701052    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:33.701069    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:33.701078    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:33.701085    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:33.701090    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:33.701097    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:35.701310    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 13
	I0728 19:05:35.701326    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:35.701430    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:35.702187    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:35.702249    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:35.702259    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:35.702268    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:35.702277    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:35.702288    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:35.702298    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:35.702317    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:35.702323    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:35.702330    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:35.702338    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:35.702348    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:35.702357    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:35.702365    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:35.702371    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:35.702384    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:35.702396    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:35.702413    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:35.702426    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:37.704397    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 14
	I0728 19:05:37.704411    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:37.704502    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:37.705320    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:37.705366    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:37.705384    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:37.705404    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:37.705417    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:37.705426    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:37.705435    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:37.705449    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:37.705457    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:37.705471    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:37.705482    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:37.705492    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:37.705502    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:37.705508    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:37.705516    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:37.705523    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:37.705532    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:37.705551    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:37.705564    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:39.706361    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 15
	I0728 19:05:39.706378    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:39.706497    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:39.707530    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:39.707558    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:39.707572    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:39.707582    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:39.707593    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:39.707602    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:39.707611    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:39.707634    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:39.707643    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:39.707650    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:39.707658    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:39.707676    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:39.707687    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:39.707697    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:39.707706    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:39.707722    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:39.707731    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:39.707738    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:39.707744    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:41.708199    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 16
	I0728 19:05:41.708215    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:41.708334    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:41.709144    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:41.709205    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:41.709217    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:41.709227    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:41.709238    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:41.709250    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:41.709259    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:41.709266    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:41.709272    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:41.709280    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:41.709296    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:41.709304    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:41.709325    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:41.709338    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:41.709348    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:41.709356    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:41.709364    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:41.709372    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:41.709381    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:43.709383    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 17
	I0728 19:05:43.709396    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:43.709465    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:43.710250    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:43.710323    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:43.710337    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:43.710344    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:43.710353    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:43.710360    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:43.710368    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:43.710375    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:43.710382    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:43.710390    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:43.710398    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:43.710405    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:43.710413    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:43.710425    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:43.710440    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:43.710456    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:43.710468    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:43.710477    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:43.710485    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:45.710764    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 18
	I0728 19:05:45.710779    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:45.710830    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:45.711625    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:45.711668    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:45.711681    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:45.711697    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:45.711705    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:45.711711    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:45.711717    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:45.711732    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:45.711758    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:45.711768    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:45.711774    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:45.711780    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:45.711798    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:45.711810    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:45.711819    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:45.711828    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:45.711837    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:45.711846    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:45.711855    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:47.712112    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 19
	I0728 19:05:47.712125    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:47.712222    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:47.713042    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:47.713078    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:47.713088    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:47.713097    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:47.713104    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:47.713112    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:47.713128    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:47.713136    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:47.713142    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:47.713158    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:47.713166    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:47.713177    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:47.713185    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:47.713192    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:47.713201    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:47.713217    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:47.713231    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:47.713240    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:47.713248    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:49.714181    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 20
	I0728 19:05:49.714194    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:49.714322    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:49.715095    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:49.715141    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:49.715159    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:49.715175    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:49.715184    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:49.715195    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:49.715204    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:49.715219    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:49.715232    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:49.715240    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:49.715248    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:49.715255    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:49.715263    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:49.715270    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:49.715279    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:49.715286    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:49.715292    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:49.715304    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:49.715316    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:51.716705    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 21
	I0728 19:05:51.716721    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:51.716755    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:51.717630    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:51.717681    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:51.717689    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:51.717699    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:51.717704    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:51.717710    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:51.717715    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:51.717721    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:51.717728    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:51.717747    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:51.717757    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:51.717766    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:51.717775    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:51.717783    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:51.717791    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:51.717798    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:51.717812    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:51.717820    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:51.717826    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:53.719662    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 22
	I0728 19:05:53.719681    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:53.719747    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:53.720681    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:53.720717    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:53.720725    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:53.720732    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:53.720738    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:53.720744    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:53.720751    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:53.720757    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:53.720764    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:53.720771    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:53.720777    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:53.720785    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:53.720792    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:53.720801    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:53.720815    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:53.720829    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:53.720851    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:53.720865    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:53.720882    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:55.721618    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 23
	I0728 19:05:55.721632    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:55.721711    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:55.722474    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:55.722579    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:55.722589    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:55.722595    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:55.722601    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:55.722607    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:55.722617    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:55.722624    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:55.722632    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:55.722638    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:55.722644    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:55.722650    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:55.722659    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:55.722667    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:55.722677    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:55.722686    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:55.722693    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:55.722700    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:55.722707    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:57.724758    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 24
	I0728 19:05:57.724770    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:57.724808    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:57.725688    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:57.725726    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:57.725737    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:57.725753    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:57.725760    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:57.725768    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:57.725774    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:57.725781    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:57.725789    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:57.725798    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:57.725806    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:57.725812    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:57.725818    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:57.725825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:57.725833    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:57.725846    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:57.725854    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:57.725861    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:57.725868    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:59.726810    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 25
	I0728 19:05:59.726825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:59.726916    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:05:59.727686    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:05:59.727736    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:59.727746    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:59.727754    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:59.727763    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:59.727779    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:59.727785    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:59.727792    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:59.727800    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:59.727807    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:59.727820    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:59.727827    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:59.727835    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:59.727842    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:59.727848    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:59.727854    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:59.727872    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:59.727879    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:59.727888    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:01.729878    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 26
	I0728 19:06:01.729894    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:01.730015    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:01.730796    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:06:01.730846    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:01.730857    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:01.730864    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:01.730871    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:01.730878    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:01.730885    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:01.730891    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:01.730897    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:01.730903    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:01.730910    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:01.730916    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:01.730922    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:01.730935    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:01.730947    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:01.730955    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:01.730963    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:01.730969    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:01.730976    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:03.731859    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 27
	I0728 19:06:03.731874    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:03.731894    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:03.732645    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:06:03.732706    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:03.732717    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:03.732730    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:03.732738    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:03.732748    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:03.732755    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:03.732763    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:03.732768    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:03.732774    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:03.732782    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:03.732789    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:03.732797    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:03.732804    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:03.732812    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:03.732823    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:03.732841    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:03.732854    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:03.732869    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:05.732870    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 28
	I0728 19:06:05.732894    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:05.732942    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:05.733727    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:06:05.733769    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:05.733781    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:05.733793    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:05.733801    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:05.733812    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:05.733818    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:05.733828    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:05.733837    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:05.733844    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:05.733852    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:05.733862    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:05.733870    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:05.733885    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:05.733897    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:05.733915    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:05.733930    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:05.733947    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:05.733956    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:07.734409    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 29
	I0728 19:06:07.734424    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:07.734484    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:07.735280    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for da:a3:8d:66:c9:4c in /var/db/dhcpd_leases ...
	I0728 19:06:07.735339    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:07.735350    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:07.735360    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:07.735368    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:07.735374    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:07.735382    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:07.735390    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:07.735397    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:07.735408    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:07.735419    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:07.735427    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:07.735434    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:07.735439    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:07.735446    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:07.735455    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:07.735462    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:07.735467    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:07.735487    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:09.735833    5754 client.go:171] duration metric: took 1m0.735584003s to LocalClient.Create
	I0728 19:06:11.736289    5754 start.go:128] duration metric: took 1m2.767628745s to createHost
	I0728 19:06:11.736303    5754 start.go:83] releasing machines lock for "docker-flags-771000", held for 1m2.767773186s
	W0728 19:06:11.736321    5754 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:a3:8d:66:c9:4c
	I0728 19:06:11.736645    5754 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:06:11.736669    5754 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:06:11.745539    5754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53519
	I0728 19:06:11.746003    5754 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:06:11.746491    5754 main.go:141] libmachine: Using API Version  1
	I0728 19:06:11.746521    5754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:06:11.746840    5754 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:06:11.747226    5754 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:06:11.747255    5754 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:06:11.756049    5754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53521
	I0728 19:06:11.756410    5754 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:06:11.756748    5754 main.go:141] libmachine: Using API Version  1
	I0728 19:06:11.756763    5754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:06:11.757000    5754 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:06:11.757207    5754 main.go:141] libmachine: (docker-flags-771000) Calling .GetState
	I0728 19:06:11.757300    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:11.757375    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:11.758322    5754 main.go:141] libmachine: (docker-flags-771000) Calling .DriverName
	I0728 19:06:11.779600    5754 out.go:177] * Deleting "docker-flags-771000" in hyperkit ...
	I0728 19:06:11.821404    5754 main.go:141] libmachine: (docker-flags-771000) Calling .Remove
	I0728 19:06:11.821527    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:11.821539    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:11.821608    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:11.822521    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:11.822586    5754 main.go:141] libmachine: (docker-flags-771000) DBG | waiting for graceful shutdown
	I0728 19:06:12.822974    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:12.823096    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:12.823951    5754 main.go:141] libmachine: (docker-flags-771000) DBG | waiting for graceful shutdown
	I0728 19:06:13.824931    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:13.825024    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:13.826799    5754 main.go:141] libmachine: (docker-flags-771000) DBG | waiting for graceful shutdown
	I0728 19:06:14.828134    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:14.828171    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:14.828763    5754 main.go:141] libmachine: (docker-flags-771000) DBG | waiting for graceful shutdown
	I0728 19:06:15.830850    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:15.830935    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:15.831494    5754 main.go:141] libmachine: (docker-flags-771000) DBG | waiting for graceful shutdown
	I0728 19:06:16.833664    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:16.833689    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5768
	I0728 19:06:16.834800    5754 main.go:141] libmachine: (docker-flags-771000) DBG | sending sigkill
	I0728 19:06:16.834810    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:16.845127    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:06:16 WARN : hyperkit: failed to read stderr: EOF
	I0728 19:06:16.845158    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:06:16 WARN : hyperkit: failed to read stdout: EOF
	W0728 19:06:16.867531    5754 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:a3:8d:66:c9:4c
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:a3:8d:66:c9:4c
	I0728 19:06:16.867551    5754 start.go:729] Will try again in 5 seconds ...
	I0728 19:06:21.868729    5754 start.go:360] acquireMachinesLock for docker-flags-771000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:07:14.591935    5754 start.go:364] duration metric: took 52.723556813s to acquireMachinesLock for "docker-flags-771000"
	I0728 19:07:14.591965    5754 start.go:93] Provisioning new machine with config: &{Name:docker-flags-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:07:14.592018    5754 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:07:14.634147    5754 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:07:14.634210    5754 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:07:14.634233    5754 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:07:14.642771    5754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53525
	I0728 19:07:14.643109    5754 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:07:14.643432    5754 main.go:141] libmachine: Using API Version  1
	I0728 19:07:14.643448    5754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:07:14.643689    5754 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:07:14.643821    5754 main.go:141] libmachine: (docker-flags-771000) Calling .GetMachineName
	I0728 19:07:14.643912    5754 main.go:141] libmachine: (docker-flags-771000) Calling .DriverName
	I0728 19:07:14.644019    5754 start.go:159] libmachine.API.Create for "docker-flags-771000" (driver="hyperkit")
	I0728 19:07:14.644036    5754 client.go:168] LocalClient.Create starting
	I0728 19:07:14.644062    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:07:14.644117    5754 main.go:141] libmachine: Decoding PEM data...
	I0728 19:07:14.644129    5754 main.go:141] libmachine: Parsing certificate...
	I0728 19:07:14.644173    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:07:14.644215    5754 main.go:141] libmachine: Decoding PEM data...
	I0728 19:07:14.644233    5754 main.go:141] libmachine: Parsing certificate...
	I0728 19:07:14.644246    5754 main.go:141] libmachine: Running pre-create checks...
	I0728 19:07:14.644252    5754 main.go:141] libmachine: (docker-flags-771000) Calling .PreCreateCheck
	I0728 19:07:14.644325    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:14.644350    5754 main.go:141] libmachine: (docker-flags-771000) Calling .GetConfigRaw
	I0728 19:07:14.675999    5754 main.go:141] libmachine: Creating machine...
	I0728 19:07:14.676008    5754 main.go:141] libmachine: (docker-flags-771000) Calling .Create
	I0728 19:07:14.676123    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:14.676254    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:07:14.676114    5787 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:07:14.676330    5754 main.go:141] libmachine: (docker-flags-771000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:07:15.092906    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:07:15.092853    5787 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/id_rsa...
	I0728 19:07:15.200039    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:07:15.199978    5787 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/docker-flags-771000.rawdisk...
	I0728 19:07:15.200057    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Writing magic tar header
	I0728 19:07:15.200078    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Writing SSH key tar header
	I0728 19:07:15.200403    5754 main.go:141] libmachine: (docker-flags-771000) DBG | I0728 19:07:15.200367    5787 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000 ...
	I0728 19:07:15.616506    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:15.616528    5754 main.go:141] libmachine: (docker-flags-771000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/hyperkit.pid
	I0728 19:07:15.616579    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Using UUID ee53e238-6e8b-4556-8403-4b4c46085b29
	I0728 19:07:15.641947    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Generated MAC ae:29:21:df:72:36
	I0728 19:07:15.641965    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-771000
	I0728 19:07:15.641999    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ee53e238-6e8b-4556-8403-4b4c46085b29", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:07:15.642041    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ee53e238-6e8b-4556-8403-4b4c46085b29", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:07:15.642089    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ee53e238-6e8b-4556-8403-4b4c46085b29", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/docker-flags-771000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-771000"}
	I0728 19:07:15.642145    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ee53e238-6e8b-4556-8403-4b4c46085b29 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/docker-flags-771000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-771000"
	I0728 19:07:15.642195    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:07:15.645053    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 DEBUG: hyperkit: Pid is 5801
	I0728 19:07:15.646300    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 0
	I0728 19:07:15.646318    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:15.646415    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:15.647384    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:15.647456    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:15.647496    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:15.647534    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:15.647550    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:15.647585    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:15.647602    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:15.647615    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:15.647628    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:15.647640    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:15.647660    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:15.647691    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:15.647704    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:15.647716    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:15.647728    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:15.647739    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:15.647752    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:15.647763    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:15.647775    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:15.652700    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:07:15.660679    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/docker-flags-771000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:07:15.661705    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:07:15.661732    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:07:15.661744    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:07:15.661755    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:07:16.043806    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:07:16.043820    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:07:16.159063    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:07:16.159080    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:07:16.159093    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:07:16.159108    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:07:16.159970    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:07:16.159989    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:07:17.648611    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 1
	I0728 19:07:17.648628    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:17.648667    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:17.649478    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:17.649538    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:17.649552    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:17.649561    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:17.649569    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:17.649583    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:17.649593    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:17.649599    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:17.649606    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:17.649612    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:17.649636    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:17.649649    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:17.649665    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:17.649681    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:17.649694    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:17.649712    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:17.649725    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:17.649735    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:17.649744    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:19.649916    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 2
	I0728 19:07:19.649946    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:19.649983    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:19.650785    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:19.650820    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:19.650831    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:19.650853    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:19.650865    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:19.650872    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:19.650882    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:19.650890    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:19.650898    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:19.650905    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:19.650940    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:19.650952    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:19.650959    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:19.650970    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:19.650977    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:19.650983    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:19.650990    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:19.651001    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:19.651014    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:21.568622    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 19:07:21.568756    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 19:07:21.568765    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 19:07:21.588440    5754 main.go:141] libmachine: (docker-flags-771000) DBG | 2024/07/28 19:07:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 19:07:21.651017    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 3
	I0728 19:07:21.651046    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:21.651230    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:21.652651    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:21.652777    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:21.652799    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:21.652826    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:21.652844    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:21.652864    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:21.652876    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:21.652929    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:21.652950    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:21.652972    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:21.652991    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:21.653002    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:21.653013    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:21.653024    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:21.653035    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:21.653061    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:21.653083    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:21.653092    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:21.653103    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:23.653134    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 4
	I0728 19:07:23.653155    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:23.653257    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:23.654028    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:23.654074    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:23.654088    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:23.654099    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:23.654114    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:23.654132    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:23.654141    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:23.654156    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:23.654165    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:23.654173    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:23.654180    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:23.654187    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:23.654195    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:23.654201    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:23.654207    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:23.654214    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:23.654222    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:23.654228    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:23.654236    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:25.655549    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 5
	I0728 19:07:25.655563    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:25.655633    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:25.656390    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:25.656450    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:25.656461    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:25.656471    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:25.656479    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:25.656487    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:25.656493    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:25.656507    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:25.656514    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:25.656520    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:25.656528    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:25.656536    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:25.656543    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:25.656561    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:25.656572    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:25.656579    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:25.656588    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:25.656595    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:25.656603    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:27.657048    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 6
	I0728 19:07:27.657075    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:27.657152    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:27.657917    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:27.657964    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:27.657972    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:27.657980    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:27.657987    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:27.658021    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:27.658036    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:27.658044    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:27.658058    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:27.658088    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:27.658123    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:27.658138    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:27.658145    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:27.658153    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:27.658168    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:27.658176    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:27.658184    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:27.658192    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:27.658204    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:29.659001    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 7
	I0728 19:07:29.659015    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:29.659158    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:29.659953    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:29.660009    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:29.660017    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:29.660026    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:29.660032    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:29.660040    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:29.660049    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:29.660057    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:29.660080    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:29.660111    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:29.660123    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:29.660131    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:29.660141    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:29.660158    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:29.660174    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:29.660185    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:29.660194    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:29.660202    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:29.660211    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:31.662213    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 8
	I0728 19:07:31.662229    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:31.662239    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:31.663032    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:31.663074    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:31.663084    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:31.663094    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:31.663099    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:31.663117    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:31.663129    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:31.663148    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:31.663157    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:31.663165    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:31.663173    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:31.663188    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:31.663200    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:31.663209    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:31.663216    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:31.663222    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:31.663228    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:31.663237    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:31.663247    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:33.665219    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 9
	I0728 19:07:33.665232    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:33.665320    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:33.666176    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:33.666204    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:33.666212    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:33.666220    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:33.666227    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:33.666233    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:33.666239    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:33.666259    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:33.666273    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:33.666281    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:33.666287    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:33.666296    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:33.666308    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:33.666317    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:33.666334    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:33.666347    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:33.666356    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:33.666364    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:33.666379    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:35.668379    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 10
	I0728 19:07:35.668391    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:35.668461    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:35.669221    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:35.669285    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:35.669296    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:35.669306    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:35.669313    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:35.669323    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:35.669332    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:35.669347    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:35.669362    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:35.669371    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:35.669378    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:35.669387    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:35.669395    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:35.669404    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:35.669412    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:35.669419    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:35.669427    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:35.669437    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:35.669446    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:37.669882    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 11
	I0728 19:07:37.669902    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:37.669990    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:37.670838    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:37.670901    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:37.670911    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:37.670926    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:37.670941    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:37.670962    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:37.670971    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:37.670987    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:37.671000    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:37.671010    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:37.671017    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:37.671025    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:37.671033    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:37.671040    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:37.671048    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:37.671055    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:37.671063    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:37.671070    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:37.671076    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:39.671275    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 12
	I0728 19:07:39.671291    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:39.671342    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:39.672148    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:39.672201    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:39.672213    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:39.672235    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:39.672247    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:39.672256    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:39.672262    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:39.672268    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:39.672274    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:39.672290    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:39.672304    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:39.672313    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:39.672322    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:39.672333    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:39.672341    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:39.672348    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:39.672359    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:39.672367    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:39.672374    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:41.673119    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 13
	I0728 19:07:41.673134    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:41.673238    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:41.673983    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:41.674035    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:41.674043    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:41.674051    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:41.674061    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:41.674080    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:41.674105    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:41.674123    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:41.674133    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:41.674143    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:41.674152    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:41.674163    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:41.674172    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:41.674180    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:41.674187    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:41.674200    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:41.674210    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:41.674220    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:41.674228    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:43.675255    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 14
	I0728 19:07:43.675272    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:43.675355    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:43.676238    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:43.676283    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:43.676292    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:43.676303    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:43.676310    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:43.676316    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:43.676323    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:43.676329    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:43.676336    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:43.676343    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:43.676351    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:43.676362    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:43.676370    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:43.676380    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:43.676386    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:43.676395    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:43.676402    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:43.676409    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:43.676417    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:45.676456    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 15
	I0728 19:07:45.676472    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:45.676592    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:45.677349    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:45.677399    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:45.677410    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:45.677429    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:45.677442    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:45.677457    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:45.677467    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:45.677481    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:45.677494    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:45.677502    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:45.677516    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:45.677523    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:45.677530    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:45.677536    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:45.677544    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:45.677561    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:45.677570    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:45.677578    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:45.677586    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:47.678752    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 16
	I0728 19:07:47.678765    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:47.678865    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:47.679890    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:47.679904    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:47.679911    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:47.679920    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:47.679936    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:47.679943    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:47.679950    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:47.679957    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:47.679965    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:47.679972    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:47.679981    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:47.679999    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:47.680014    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:47.680028    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:47.680036    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:47.680044    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:47.680052    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:47.680059    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:47.680064    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:49.681291    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 17
	I0728 19:07:49.681303    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:49.681382    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:49.682429    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:49.682469    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:49.682483    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:49.682505    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:49.682517    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:49.682533    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:49.682548    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:49.682561    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:49.682570    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:49.682584    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:49.682597    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:49.682610    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:49.682623    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:49.682631    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:49.682638    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:49.682652    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:49.682660    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:49.682669    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:49.682677    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:51.683249    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 18
	I0728 19:07:51.683265    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:51.683393    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:51.684210    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:51.684251    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:51.684263    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:51.684281    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:51.684292    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:51.684309    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:51.684318    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:51.684328    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:51.684337    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:51.684347    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:51.684355    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:51.684362    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:51.684370    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:51.684377    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:51.684383    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:51.684389    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:51.684396    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:51.684404    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:51.684412    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:53.686255    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 19
	I0728 19:07:53.686268    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:53.686345    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:53.687115    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:53.687169    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:53.687181    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:53.687202    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:53.687212    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:53.687219    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:53.687228    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:53.687235    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:53.687246    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:53.687260    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:53.687269    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:53.687275    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:53.687282    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:53.687292    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:53.687300    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:53.687307    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:53.687315    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:53.687323    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:53.687329    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:55.687352    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 20
	I0728 19:07:55.687385    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:55.687462    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:55.688217    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:55.688266    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:55.688284    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:55.688306    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:55.688319    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:55.688335    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:55.688348    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:55.688357    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:55.688364    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:55.688371    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:55.688379    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:55.688390    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:55.688405    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:55.688415    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:55.688424    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:55.688433    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:55.688439    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:55.688453    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:55.688460    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:57.689645    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 21
	I0728 19:07:57.689661    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:57.689749    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:57.690567    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:57.690613    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:57.690622    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:57.690630    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:57.690637    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:57.690645    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:57.690658    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:57.690667    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:57.690673    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:57.690683    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:57.690692    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:57.690700    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:57.690723    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:57.690744    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:57.690753    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:57.690761    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:57.690770    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:57.690782    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:57.690790    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:59.692238    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 22
	I0728 19:07:59.692338    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:59.692393    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:07:59.693171    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:07:59.693211    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:59.693219    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:59.693229    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:59.693236    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:59.693243    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:59.693249    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:59.693256    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:59.693273    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:59.693285    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:59.693295    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:59.693303    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:59.693311    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:59.693334    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:59.693351    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:59.693365    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:59.693372    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:59.693380    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:59.693389    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:01.694618    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 23
	I0728 19:08:01.694641    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:01.694740    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:01.695614    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:01.695656    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:01.695667    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:01.695677    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:01.695684    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:01.695698    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:01.695705    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:01.695712    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:01.695718    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:01.695727    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:01.695733    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:01.695742    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:01.695749    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:01.695755    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:01.695762    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:01.695768    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:01.695774    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:01.695781    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:01.695788    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:03.697862    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 24
	I0728 19:08:03.697879    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:03.697988    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:03.698765    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:03.698810    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:03.698826    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:03.698837    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:03.698843    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:03.698850    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:03.698856    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:03.698862    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:03.698868    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:03.698875    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:03.698882    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:03.698887    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:03.698910    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:03.698922    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:03.698940    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:03.698955    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:03.698968    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:03.698976    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:03.698984    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:05.699951    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 25
	I0728 19:08:05.699965    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:05.700008    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:05.700968    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:05.701014    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:05.701028    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:05.701043    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:05.701053    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:05.701062    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:05.701067    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:05.701084    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:05.701096    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:05.701105    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:05.701111    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:05.701118    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:05.701143    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:05.701154    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:05.701163    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:05.701174    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:05.701182    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:05.701189    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:05.701195    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:07.702550    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 26
	I0728 19:08:07.702567    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:07.702628    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:07.703382    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:07.703451    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:07.703462    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:07.703475    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:07.703482    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:07.703490    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:07.703497    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:07.703503    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:07.703512    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:07.703528    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:07.703540    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:07.703560    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:07.703574    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:07.703583    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:07.703593    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:07.703604    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:07.703615    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:07.703623    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:07.703631    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:09.705548    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 27
	I0728 19:08:09.705565    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:09.705634    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:09.706399    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:09.706453    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:09.706462    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:09.706473    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:09.706479    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:09.706486    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:09.706492    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:09.706498    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:09.706503    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:09.706520    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:09.706531    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:09.706539    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:09.706548    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:09.706557    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:09.706566    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:09.706574    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:09.706584    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:09.706591    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:09.706599    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:11.708032    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 28
	I0728 19:08:11.708046    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:11.708115    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:11.709137    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:11.709184    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:11.709200    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:11.709220    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:11.709229    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:11.709237    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:11.709243    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:11.709250    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:11.709258    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:11.709264    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:11.709270    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:11.709287    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:11.709300    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:11.709309    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:11.709314    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:11.709321    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:11.709330    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:11.709345    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:11.709356    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:13.710593    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Attempt 29
	I0728 19:08:13.711046    5754 main.go:141] libmachine: (docker-flags-771000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:08:13.711066    5754 main.go:141] libmachine: (docker-flags-771000) DBG | hyperkit pid from json: 5801
	I0728 19:08:13.711602    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Searching for ae:29:21:df:72:36 in /var/db/dhcpd_leases ...
	I0728 19:08:13.711641    5754 main.go:141] libmachine: (docker-flags-771000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:08:13.711721    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:08:13.711752    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:08:13.711777    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:08:13.711793    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:08:13.711800    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:08:13.711809    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:08:13.711816    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:08:13.711825    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:08:13.711834    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:08:13.711844    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:08:13.711888    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:08:13.711906    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:08:13.711924    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:08:13.711935    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:08:13.711947    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:08:13.711958    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:08:13.711970    5754 main.go:141] libmachine: (docker-flags-771000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:08:15.712576    5754 client.go:171] duration metric: took 1m1.068973543s to LocalClient.Create
	I0728 19:08:17.714715    5754 start.go:128] duration metric: took 1m3.123138802s to createHost
	I0728 19:08:17.714749    5754 start.go:83] releasing machines lock for "docker-flags-771000", held for 1m3.123238901s
	W0728 19:08:17.714854    5754 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-771000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:29:21:df:72:36
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-771000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:29:21:df:72:36
	I0728 19:08:17.777919    5754 out.go:177] 
	W0728 19:08:17.799108    5754 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:29:21:df:72:36
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:29:21:df:72:36
	W0728 19:08:17.799126    5754 out.go:239] * 
	* 
	W0728 19:08:17.799748    5754 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 19:08:17.862050    5754 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-771000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-771000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-771000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (175.613038ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-771000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-771000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-771000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-771000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (171.67835ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-771000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-771000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-771000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-28 19:08:18.315772 -0700 PDT m=+4940.050786607
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-771000 -n docker-flags-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-771000 -n docker-flags-771000: exit status 7 (80.01502ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 19:08:18.393776    5825 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:08:18.393800    5825 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-771000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-771000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-771000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-771000: (5.25569745s)
--- FAIL: TestDockerFlags (251.88s)
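
Debugging note (not part of the harness output): the repeated "Searching for <MAC> in /var/db/dhcpd_leases" loop above amounts to scanning the macOS DHCP lease file for an `hw_address` line matching the VM's generated MAC and reading back the `ip_address` of the same entry; the test fails because no entry for ae:29:21:df:72:36 ever appears. A minimal sketch of that lookup, run here against an inline sample in the dhcpd_leases entry format (the values are illustrative, copied from a lease that *was* found in the log above):

```shell
# Approximate the hyperkit driver's lease lookup: find the ip_address
# of the dhcpd_leases entry whose hw_address contains a given MAC.
# The heredoc-style sample mimics /var/db/dhcpd_leases; in real debugging
# you would read that file (as root) instead.
leases='{
	name=minikube
	ip_address=192.169.0.18
	hw_address=1,1e:c3:6d:9a:fd:31
	lease=0x66a848b6
}'
mac="1e:c3:6d:9a:fd:31"
# Remember the most recent ip_address seen; print it when hw_address matches.
ip=$(printf '%s\n' "$leases" | awk -v mac="$mac" '
	/ip_address=/ { sub(/.*ip_address=/, ""); ip = $0 }
	/hw_address=/ && index($0, mac) { print ip }')
echo "$ip"
```

When the MAC never shows up (as with ae:29:21:df:72:36 in this run), the lookup yields nothing, which is exactly the "IP address never found in dhcp leases file" condition reported by the driver.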

                                                
                                    
TestForceSystemdFlag (251.75s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-925000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-925000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.17718195s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-925000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-925000" primary control-plane node in "force-systemd-flag-925000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-925000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 19:03:08.680940    5724 out.go:291] Setting OutFile to fd 1 ...
	I0728 19:03:08.681215    5724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:03:08.681221    5724 out.go:304] Setting ErrFile to fd 2...
	I0728 19:03:08.681224    5724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:03:08.681405    5724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 19:03:08.682864    5724 out.go:298] Setting JSON to false
	I0728 19:03:08.705938    5724 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5559,"bootTime":1722213029,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 19:03:08.706025    5724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 19:03:08.729528    5724 out.go:177] * [force-systemd-flag-925000] minikube v1.33.1 on Darwin 14.5
	I0728 19:03:08.771433    5724 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 19:03:08.771455    5724 notify.go:220] Checking for updates...
	I0728 19:03:08.813317    5724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 19:03:08.834401    5724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 19:03:08.855485    5724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 19:03:08.876155    5724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:03:08.896299    5724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 19:03:08.917818    5724 config.go:182] Loaded profile config "force-systemd-env-720000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 19:03:08.917919    5724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 19:03:08.946178    5724 out.go:177] * Using the hyperkit driver based on user configuration
	I0728 19:03:08.987187    5724 start.go:297] selected driver: hyperkit
	I0728 19:03:08.987202    5724 start.go:901] validating driver "hyperkit" against <nil>
	I0728 19:03:08.987212    5724 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 19:03:08.990319    5724 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:03:08.990436    5724 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 19:03:08.998867    5724 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 19:03:09.002742    5724 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:03:09.002763    5724 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 19:03:09.002799    5724 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 19:03:09.002996    5724 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 19:03:09.003047    5724 cni.go:84] Creating CNI manager for ""
	I0728 19:03:09.003072    5724 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 19:03:09.003081    5724 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 19:03:09.003136    5724 start.go:340] cluster config:
	{Name:force-systemd-flag-925000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 19:03:09.003222    5724 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:03:09.046301    5724 out.go:177] * Starting "force-systemd-flag-925000" primary control-plane node in "force-systemd-flag-925000" cluster
	I0728 19:03:09.067241    5724 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 19:03:09.067276    5724 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 19:03:09.067295    5724 cache.go:56] Caching tarball of preloaded images
	I0728 19:03:09.067402    5724 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 19:03:09.067411    5724 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 19:03:09.067485    5724 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/force-systemd-flag-925000/config.json ...
	I0728 19:03:09.067504    5724 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/force-systemd-flag-925000/config.json: {Name:mke6fc49d3228fdbd1dc624e7a72c4cd00e061e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 19:03:09.067853    5724 start.go:360] acquireMachinesLock for force-systemd-flag-925000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:04:05.988992    5724 start.go:364] duration metric: took 56.921536114s to acquireMachinesLock for "force-systemd-flag-925000"
	I0728 19:04:05.989065    5724 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:04:05.989120    5724 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:04:06.031346    5724 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:04:06.031467    5724 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:04:06.031503    5724 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:04:06.040093    5724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53497
	I0728 19:04:06.040475    5724 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:04:06.040899    5724 main.go:141] libmachine: Using API Version  1
	I0728 19:04:06.040909    5724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:04:06.041114    5724 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:04:06.041232    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .GetMachineName
	I0728 19:04:06.041340    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .DriverName
	I0728 19:04:06.041449    5724 start.go:159] libmachine.API.Create for "force-systemd-flag-925000" (driver="hyperkit")
	I0728 19:04:06.041487    5724 client.go:168] LocalClient.Create starting
	I0728 19:04:06.041520    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:04:06.041579    5724 main.go:141] libmachine: Decoding PEM data...
	I0728 19:04:06.041597    5724 main.go:141] libmachine: Parsing certificate...
	I0728 19:04:06.041655    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:04:06.041693    5724 main.go:141] libmachine: Decoding PEM data...
	I0728 19:04:06.041701    5724 main.go:141] libmachine: Parsing certificate...
	I0728 19:04:06.041714    5724 main.go:141] libmachine: Running pre-create checks...
	I0728 19:04:06.041724    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .PreCreateCheck
	I0728 19:04:06.041804    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:06.042016    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .GetConfigRaw
	I0728 19:04:06.052427    5724 main.go:141] libmachine: Creating machine...
	I0728 19:04:06.052436    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .Create
	I0728 19:04:06.052541    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:06.052667    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:04:06.052529    5735 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:04:06.052718    5724 main.go:141] libmachine: (force-systemd-flag-925000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:04:06.451209    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:04:06.451115    5735 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/id_rsa...
	I0728 19:04:06.491086    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:04:06.490999    5735 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/force-systemd-flag-925000.rawdisk...
	I0728 19:04:06.491102    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Writing magic tar header
	I0728 19:04:06.491119    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Writing SSH key tar header
	I0728 19:04:06.491420    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:04:06.491382    5735 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000 ...
	I0728 19:04:06.876617    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:06.876636    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/hyperkit.pid
	I0728 19:04:06.876703    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Using UUID fa36bb9a-4852-4af5-aaa2-c6c6b7cbbc71
	I0728 19:04:06.903610    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Generated MAC a6:ff:93:24:46:2a
	I0728 19:04:06.903627    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-925000
	I0728 19:04:06.903677    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fa36bb9a-4852-4af5-aaa2-c6c6b7cbbc71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:04:06.903717    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fa36bb9a-4852-4af5-aaa2-c6c6b7cbbc71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:04:06.903770    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fa36bb9a-4852-4af5-aaa2-c6c6b7cbbc71", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/force-systemd-flag-925000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/fo
rce-systemd-flag-925000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-925000"}
	I0728 19:04:06.903818    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fa36bb9a-4852-4af5-aaa2-c6c6b7cbbc71 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/force-systemd-flag-925000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/bzimage,/Users/jenkins/minikube-integr
ation/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-925000"
	I0728 19:04:06.903827    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:04:06.906793    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 DEBUG: hyperkit: Pid is 5749
	I0728 19:04:06.907202    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 0
	I0728 19:04:06.907217    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:06.907333    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:06.908201    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:06.908235    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:06.908258    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:06.908272    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:06.908305    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:06.908321    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:06.908330    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:06.908338    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:06.908347    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:06.908358    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:06.908365    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:06.908375    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:06.908385    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:06.908392    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:06.908398    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:06.908419    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:06.908427    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:06.908436    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:06.908447    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:06.914405    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:04:06.922525    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:04:06.923459    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:04:06.923478    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:04:06.923495    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:04:06.923506    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:04:07.301892    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:04:07.301915    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:04:07.416483    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:04:07.416504    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:04:07.416552    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:04:07.416579    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:04:07.417386    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:04:07.417400    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:04:08.909098    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 1
	I0728 19:04:08.909116    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:08.909191    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:08.909960    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:08.910036    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:08.910050    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:08.910073    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:08.910083    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:08.910091    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:08.910098    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:08.910103    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:08.910111    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:08.910117    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:08.910124    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:08.910129    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:08.910137    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:08.910145    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:08.910151    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:08.910160    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:08.910166    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:08.910177    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:08.910186    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:10.910316    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 2
	I0728 19:04:10.910331    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:10.910378    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:10.911228    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:10.911282    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:10.911294    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:10.911304    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:10.911310    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:10.911317    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:10.911327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:10.911333    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:10.911339    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:10.911345    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:10.911352    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:10.911359    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:10.911366    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:10.911375    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:10.911380    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:10.911387    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:10.911394    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:10.911403    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:10.911412    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:12.835039    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0728 19:04:12.835152    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0728 19:04:12.835161    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0728 19:04:12.856049    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:04:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0728 19:04:12.911821    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 3
	I0728 19:04:12.911849    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:12.912046    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:12.913491    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:12.913619    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:12.913642    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:12.913660    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:12.913674    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:12.913697    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:12.913722    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:12.913737    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:12.913754    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:12.913806    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:12.913826    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:12.913837    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:12.913864    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:12.913877    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:12.913908    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:12.913925    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:12.913935    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:12.913946    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:12.913965    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:14.913879    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 4
	I0728 19:04:14.913893    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:14.913957    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:14.914783    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:14.914818    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:14.914829    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:14.914839    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:14.914872    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:14.914879    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:14.914889    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:14.914896    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:14.914902    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:14.914910    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:14.914925    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:14.914938    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:14.914948    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:14.914956    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:14.914964    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:14.914982    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:14.914988    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:14.914995    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:14.915003    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:16.914970    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 5
	I0728 19:04:16.914985    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:16.915047    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:16.915824    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:16.915857    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:16.915865    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:16.915875    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:16.915882    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:16.915889    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:16.915896    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:16.915902    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:16.915908    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:16.915926    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:16.915940    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:16.915954    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:16.915967    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:16.915975    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:16.915985    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:16.915993    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:16.916001    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:16.916009    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:16.916028    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:18.917292    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 6
	I0728 19:04:18.917306    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:18.917367    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:18.918131    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:18.918182    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:18.918190    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:18.918219    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:18.918228    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:18.918239    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:18.918249    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:18.918256    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:18.918267    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:18.918275    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:18.918283    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:18.918292    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:18.918300    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:18.918307    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:18.918314    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:18.918321    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:18.918326    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:18.918343    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:18.918355    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:20.918433    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 7
	I0728 19:04:20.918449    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:20.918544    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:20.919346    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:20.919402    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:20.919411    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:20.919435    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:20.919446    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:20.919460    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:20.919472    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:20.919487    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:20.919496    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:20.919503    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:20.919509    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:20.919514    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:20.919520    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:20.919529    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:20.919536    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:20.919549    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:20.919557    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:20.919563    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:20.919572    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:22.920365    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 8
	I0728 19:04:22.920378    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:22.920448    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:22.921220    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:22.921279    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:22.921292    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:22.921312    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:22.921319    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:22.921326    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:22.921335    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:22.921345    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:22.921353    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:22.921360    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:22.921367    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:22.921379    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:22.921389    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:22.921396    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:22.921402    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:22.921411    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:22.921419    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:22.921426    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:22.921433    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:24.922499    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 9
	I0728 19:04:24.922513    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:24.922623    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:24.923342    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:24.923396    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:24.923411    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:24.923429    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:24.923440    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:24.923448    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:24.923458    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:24.923466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:24.923472    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:24.923484    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:24.923505    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:24.923515    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:24.923531    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:24.923550    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:24.923562    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:24.923580    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:24.923591    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:24.923608    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:24.923616    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:26.925611    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 10
	I0728 19:04:26.925626    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:26.925666    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:26.926419    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:26.926466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:26.926475    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:26.926488    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:26.926516    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:26.926528    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:26.926547    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:26.926574    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:26.926584    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:26.926591    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:26.926599    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:26.926619    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:26.926632    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:26.926639    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:26.926648    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:26.926658    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:26.926665    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:26.926671    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:26.926678    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:28.927232    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 11
	I0728 19:04:28.927252    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:28.927375    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:28.928193    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:28.928229    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:28.928237    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:28.928261    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:28.928270    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:28.928279    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:28.928290    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:28.928299    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:28.928305    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:28.928314    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:28.928327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:28.928334    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:28.928341    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:28.928356    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:28.928369    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:28.928378    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:28.928385    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:28.928402    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:28.928415    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:30.930392    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 12
	I0728 19:04:30.930407    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:30.930481    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:30.931296    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:30.931341    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:30.931361    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:30.931382    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:30.931397    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:30.931408    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:30.931415    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:30.931426    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:30.931433    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:30.931446    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:30.931455    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:30.931478    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:30.931488    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:30.931503    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:30.931517    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:30.931528    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:30.931536    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:30.931543    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:30.931549    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:32.931959    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 13
	I0728 19:04:32.931973    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:32.932047    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:32.932844    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:32.932892    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:32.932905    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:32.932916    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:32.932927    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:32.932949    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:32.932959    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:32.932966    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:32.932984    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:32.932991    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:32.932998    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:32.933003    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:32.933012    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:32.933020    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:32.933026    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:32.933034    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:32.933040    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:32.933048    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:32.933057    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:34.934181    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 14
	I0728 19:04:34.934194    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:34.934272    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:34.935025    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:34.935135    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:34.935146    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:34.935153    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:34.935159    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:34.935173    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:34.935183    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:34.935193    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:34.935202    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:34.935210    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:34.935218    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:34.935225    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:34.935244    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:34.935252    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:34.935259    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:34.935272    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:34.935282    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:34.935289    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:34.935296    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:36.935675    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 15
	I0728 19:04:36.935690    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:36.935744    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:36.936574    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:36.936634    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:36.936646    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:36.936666    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:36.936674    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:36.936692    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:36.936703    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:36.936717    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:36.936730    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:36.936738    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:36.936746    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:36.936754    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:36.936764    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:36.936778    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:36.936792    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:36.936799    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:36.936807    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:36.936822    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:36.936834    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:38.937062    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 16
	I0728 19:04:38.937076    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:38.937138    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:38.937931    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:38.937975    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:38.937987    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:38.937998    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:38.938005    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:38.938012    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:38.938018    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:38.938024    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:38.938030    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:38.938046    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:38.938069    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:38.938076    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:38.938082    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:38.938101    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:38.938113    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:38.938126    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:38.938138    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:38.938150    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:38.938158    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:40.939395    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 17
	I0728 19:04:40.939412    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:40.939535    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:40.940294    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:40.940343    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:40.940354    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:40.940374    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:40.940381    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:40.940387    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:40.940394    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:40.940400    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:40.940406    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:40.940413    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:40.940422    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:40.940440    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:40.940452    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:40.940460    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:40.940469    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:40.940476    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:40.940489    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:40.940497    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:40.940511    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:42.940915    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 18
	I0728 19:04:42.940928    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:42.940997    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:42.941746    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:42.941784    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:42.941793    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:42.941811    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:42.941817    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:42.941830    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:42.941843    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:42.941866    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:42.941880    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:42.941890    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:42.941902    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:42.941912    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:42.941920    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:42.941927    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:42.941942    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:42.941949    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:42.941957    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:42.941972    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:42.941985    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:44.941978    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 19
	I0728 19:04:44.942004    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:44.942088    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:44.942855    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:44.942908    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:44.942923    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:44.942932    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:44.942939    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:44.942947    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:44.942953    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:44.942961    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:44.942967    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:44.942973    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:44.942981    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:44.942989    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:44.942997    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:44.943012    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:44.943025    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:44.943041    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:44.943055    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:44.943063    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:44.943070    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:46.945071    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 20
	I0728 19:04:46.945087    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:46.945125    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:46.945907    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:46.945964    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:46.945974    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:46.945989    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:46.946001    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:46.946033    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:46.946046    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:46.946056    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:46.946065    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:46.946073    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:46.946081    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:46.946095    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:46.946108    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:46.946115    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:46.946128    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:46.946145    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:46.946157    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:46.946169    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:46.946180    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:48.947037    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 21
	I0728 19:04:48.947049    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:48.947159    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:48.948210    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:48.948256    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:48.948266    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:48.948275    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:48.948282    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:48.948288    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:48.948309    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:48.948318    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:48.948326    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:48.948335    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:48.948351    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:48.948363    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:48.948371    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:48.948379    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:48.948387    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:48.948395    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:48.948410    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:48.948416    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:48.948424    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:50.950196    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 22
	I0728 19:04:50.950211    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:50.950283    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:50.951122    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:50.951158    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:50.951167    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:50.951176    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:50.951183    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:50.951202    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:50.951216    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:50.951225    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:50.951252    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:50.951274    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:50.951283    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:50.951297    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:50.951313    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:50.951321    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:50.951333    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:50.951343    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:50.951350    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:50.951356    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:50.951363    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:52.951389    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 23
	I0728 19:04:52.951405    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:52.951448    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:52.952263    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:52.952307    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:52.952316    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:52.952325    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:52.952334    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:52.952341    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:52.952359    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:52.952374    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:52.952386    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:52.952394    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:52.952400    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:52.952407    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:52.952423    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:52.952431    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:52.952439    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:52.952446    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:52.952453    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:52.952461    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:52.952468    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:54.953413    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 24
	I0728 19:04:54.953427    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:54.953523    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:54.954321    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:54.954343    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:54.954365    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:54.954375    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:54.954385    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:54.954392    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:54.954398    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:54.954405    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:54.954426    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:54.954438    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:54.954446    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:54.954453    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:54.954459    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:54.954466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:54.954472    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:54.954479    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:54.954485    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:54.954493    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:54.954507    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:56.956313    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 25
	I0728 19:04:56.956327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:56.956454    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:56.957388    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:56.957422    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:56.957431    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:56.957442    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:56.957450    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:56.957460    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:56.957476    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:56.957485    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:56.957496    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:56.957505    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:56.957513    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:56.957530    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:56.957547    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:56.957562    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:56.957574    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:56.957583    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:56.957590    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:56.957597    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:56.957602    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:58.958871    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 26
	I0728 19:04:58.958886    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:58.959001    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:04:58.959806    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:04:58.959859    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:58.959870    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:58.959879    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:58.959885    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:58.959891    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:58.959897    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:58.959904    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:58.959911    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:58.959917    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:58.959923    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:58.959930    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:58.959936    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:58.959943    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:58.959949    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:58.959956    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:58.959963    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:58.959970    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:58.959978    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:00.961049    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 27
	I0728 19:05:00.961065    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:00.961217    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:00.962270    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:05:00.962316    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:00.962344    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:00.962352    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:00.962368    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:00.962378    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:00.962387    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:00.962396    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:00.962419    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:00.962433    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:00.962441    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:00.962451    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:00.962462    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:00.962475    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:00.962486    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:00.962494    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:00.962501    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:00.962509    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:00.962524    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:02.964491    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 28
	I0728 19:05:02.964504    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:02.964579    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:02.965566    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:05:02.965613    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:02.965625    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:02.965635    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:02.965642    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:02.965675    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:02.965704    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:02.965715    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:02.965723    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:02.965731    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:02.965738    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:02.965745    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:02.965764    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:02.965777    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:02.965789    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:02.965797    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:02.965805    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:02.965810    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:02.965824    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:04.966962    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 29
	I0728 19:05:04.966975    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:04.967079    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:04.968024    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for a6:ff:93:24:46:2a in /var/db/dhcpd_leases ...
	I0728 19:05:04.968065    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:05:04.968076    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:05:04.968082    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:05:04.968115    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:05:04.968129    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:05:04.968138    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:05:04.968146    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:05:04.968154    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:05:04.968164    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:05:04.968172    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:05:04.968180    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:05:04.968188    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:05:04.968194    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:05:04.968204    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:05:04.968214    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:05:04.968222    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:05:04.968231    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:05:04.968238    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:05:06.968552    5724 client.go:171] duration metric: took 1m0.92749143s to LocalClient.Create
	I0728 19:05:08.968839    5724 start.go:128] duration metric: took 1m2.98016238s to createHost
	I0728 19:05:08.968867    5724 start.go:83] releasing machines lock for "force-systemd-flag-925000", held for 1m2.980306501s
	W0728 19:05:08.968883    5724 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:ff:93:24:46:2a
	I0728 19:05:08.969233    5724 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:05:08.969252    5724 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:05:08.978318    5724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53513
	I0728 19:05:08.978749    5724 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:05:08.979097    5724 main.go:141] libmachine: Using API Version  1
	I0728 19:05:08.979106    5724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:05:08.979386    5724 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:05:08.979837    5724 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:05:08.979862    5724 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:05:08.988483    5724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53515
	I0728 19:05:08.988883    5724 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:05:08.989383    5724 main.go:141] libmachine: Using API Version  1
	I0728 19:05:08.989433    5724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:05:08.989703    5724 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:05:08.989823    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .GetState
	I0728 19:05:08.989910    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:08.989991    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:08.990927    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .DriverName
	I0728 19:05:09.012485    5724 out.go:177] * Deleting "force-systemd-flag-925000" in hyperkit ...
	I0728 19:05:09.055175    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .Remove
	I0728 19:05:09.055324    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.055335    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.055383    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:09.056289    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:09.056348    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | waiting for graceful shutdown
	I0728 19:05:10.058454    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:10.058497    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:10.059432    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | waiting for graceful shutdown
	I0728 19:05:11.060689    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:11.060846    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:11.062452    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | waiting for graceful shutdown
	I0728 19:05:12.062668    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:12.062718    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:12.063443    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | waiting for graceful shutdown
	I0728 19:05:13.063740    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:13.063873    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:13.064510    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | waiting for graceful shutdown
	I0728 19:05:14.065269    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:05:14.065383    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5749
	I0728 19:05:14.066405    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | sending sigkill
	I0728 19:05:14.066416    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0728 19:05:14.077411    5724 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:ff:93:24:46:2a
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:ff:93:24:46:2a
	I0728 19:05:14.077438    5724 start.go:729] Will try again in 5 seconds ...
	I0728 19:05:14.089560    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:05:14 WARN : hyperkit: failed to read stdout: EOF
	I0728 19:05:14.089581    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:05:14 WARN : hyperkit: failed to read stderr: EOF
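The failure above comes from the driver's lease scan: after boot it repeatedly greps `/var/db/dhcpd_leases` for the VM's generated MAC (`Searching for a6:ff:93:24:46:2a ...`), and gives up when no entry ever appears. As an illustrative sketch only — the function name is hypothetical and the entry format is modeled on the `dhcp entry: {...}` lines echoed in this log, not on the exact on-disk `dhcpd_leases` syntax — the lookup amounts to:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findIPForMAC scans lease entries of the shape echoed in the log above,
// e.g. "{Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ...}",
// and returns the IP bound to the given hardware address, or "" if absent.
func findIPForMAC(leases, mac string) string {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)
	for _, line := range strings.Split(leases, "\n") {
		if m := re.FindStringSubmatch(line); m != nil && m[2] == mac {
			return m[1]
		}
	}
	return "" // MAC never leased — the condition the test is failing on
}

func main() {
	leases := `{Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
{Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}`
	fmt.Println(findIPForMAC(leases, "9a:f7:34:b6:18:f")) // 192.169.0.5
	fmt.Println(findIPForMAC(leases, "a6:ff:93:24:46:2a")) // empty: same symptom as this failure
}
```

Note the driver retries this scan (Attempt 0..29 above) before declaring "IP address never found in dhcp leases file"; the 17 existing entries show bootpd is working, so the missing entry points at the new VM never DHCPing, not at the scan itself.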
	I0728 19:05:19.079502    5724 start.go:360] acquireMachinesLock for force-systemd-flag-925000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:06:11.736349    5724 start.go:364] duration metric: took 52.657192739s to acquireMachinesLock for "force-systemd-flag-925000"
	I0728 19:06:11.736375    5724 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:06:11.736426    5724 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:06:11.757690    5724 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:06:11.757761    5724 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:06:11.757782    5724 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:06:11.766465    5724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53523
	I0728 19:06:11.766831    5724 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:06:11.767215    5724 main.go:141] libmachine: Using API Version  1
	I0728 19:06:11.767239    5724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:06:11.767454    5724 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:06:11.767567    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .GetMachineName
	I0728 19:06:11.767654    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .DriverName
	I0728 19:06:11.767750    5724 start.go:159] libmachine.API.Create for "force-systemd-flag-925000" (driver="hyperkit")
	I0728 19:06:11.767768    5724 client.go:168] LocalClient.Create starting
	I0728 19:06:11.767794    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:06:11.767848    5724 main.go:141] libmachine: Decoding PEM data...
	I0728 19:06:11.767860    5724 main.go:141] libmachine: Parsing certificate...
	I0728 19:06:11.767902    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:06:11.767941    5724 main.go:141] libmachine: Decoding PEM data...
	I0728 19:06:11.767952    5724 main.go:141] libmachine: Parsing certificate...
	I0728 19:06:11.767967    5724 main.go:141] libmachine: Running pre-create checks...
	I0728 19:06:11.767972    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .PreCreateCheck
	I0728 19:06:11.768043    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:11.768080    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .GetConfigRaw
	I0728 19:06:11.801055    5724 main.go:141] libmachine: Creating machine...
	I0728 19:06:11.801063    5724 main.go:141] libmachine: (force-systemd-flag-925000) Calling .Create
	I0728 19:06:11.801148    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:11.801313    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:06:11.801139    5783 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:06:11.801360    5724 main.go:141] libmachine: (force-systemd-flag-925000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:06:12.029172    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:06:12.029083    5783 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/id_rsa...
	I0728 19:06:12.111480    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:06:12.111404    5783 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/force-systemd-flag-925000.rawdisk...
	I0728 19:06:12.111502    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Writing magic tar header
	I0728 19:06:12.111516    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Writing SSH key tar header
	I0728 19:06:12.112103    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | I0728 19:06:12.112059    5783 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000 ...
	I0728 19:06:12.488867    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:12.488889    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/hyperkit.pid
	I0728 19:06:12.488900    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Using UUID 8363cedd-4553-4588-a364-7b8316e958a3
	I0728 19:06:12.513900    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Generated MAC 3e:41:1a:b9:71:cb
	I0728 19:06:12.513919    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-925000
	I0728 19:06:12.513961    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8363cedd-4553-4588-a364-7b8316e958a3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:06:12.513998    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8363cedd-4553-4588-a364-7b8316e958a3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:06:12.514048    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8363cedd-4553-4588-a364-7b8316e958a3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/force-systemd-flag-925000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/fo
rce-systemd-flag-925000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-925000"}
	I0728 19:06:12.514085    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8363cedd-4553-4588-a364-7b8316e958a3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/force-systemd-flag-925000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/bzimage,/Users/jenkins/minikube-integr
ation/19312-1006/.minikube/machines/force-systemd-flag-925000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-925000"
	I0728 19:06:12.514093    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:06:12.516995    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 DEBUG: hyperkit: Pid is 5784
	I0728 19:06:12.518072    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 0
	I0728 19:06:12.518090    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:12.518189    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:12.519123    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:12.519268    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:12.519284    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:12.519299    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:12.519317    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:12.519327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:12.519353    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:12.519362    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:12.519369    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:12.519379    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:12.519391    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:12.519400    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:12.519424    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:12.519443    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:12.519465    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:12.519477    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:12.519491    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:12.519513    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:12.519534    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:12.524942    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:06:12.533199    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-flag-925000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:06:12.534076    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:06:12.534088    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:06:12.534095    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:06:12.534101    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:06:12.911781    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:06:12.911797    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:06:13.026426    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:06:13.026442    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:06:13.026466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:06:13.026490    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:06:13.027346    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:06:13.027357    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:06:14.521374    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 1
	I0728 19:06:14.521390    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:14.521415    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:14.522269    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:14.522314    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:14.522329    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:14.522346    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:14.522363    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:14.522377    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:14.522388    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:14.522405    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:14.522417    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:14.522425    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:14.522433    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:14.522441    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:14.522448    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:14.522463    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:14.522478    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:14.522488    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:14.522496    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:14.522511    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:14.522521    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:16.522905    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 2
	I0728 19:06:16.522923    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:16.522997    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:16.523813    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:16.523855    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:16.523880    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:16.523894    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:16.523902    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:16.523916    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:16.523929    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:16.523936    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:16.523945    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:16.523953    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:16.523965    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:16.523973    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:16.523982    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:16.523996    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:16.524034    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:16.524044    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:16.524051    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:16.524059    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:16.524081    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:18.457282    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0728 19:06:18.457466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0728 19:06:18.457475    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0728 19:06:18.478222    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | 2024/07/28 19:06:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0728 19:06:18.526151    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 3
	I0728 19:06:18.526180    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:18.526446    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:18.527870    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:18.527988    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:18.528029    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:18.528057    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:18.528090    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:18.528100    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:18.528112    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:18.528122    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:18.528164    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:18.528177    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:18.528186    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:18.528195    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:18.528213    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:18.528230    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:18.528241    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:18.528252    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:18.528285    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:18.528312    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:18.528330    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:20.530136    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 4
	I0728 19:06:20.530150    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:20.530234    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:20.531007    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:20.531069    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:20.531089    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:20.531098    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:20.531109    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:20.531115    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:20.531122    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:20.531133    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:20.531139    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:20.531147    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:20.531154    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:20.531160    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:20.531169    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:20.531177    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:20.531184    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:20.531191    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:20.531207    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:20.531218    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:20.531228    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:22.532436    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 5
	I0728 19:06:22.532451    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:22.532555    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:22.533393    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:22.533433    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:22.533445    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:22.533459    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:22.533470    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:22.533484    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:22.533496    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:22.533503    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:22.533510    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:22.533519    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:22.533526    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:22.533534    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:22.533557    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:22.533569    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:22.533576    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:22.533583    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:22.533596    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:22.533608    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:22.533628    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:24.535679    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 6
	I0728 19:06:24.535693    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:24.535754    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:24.536692    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:24.536704    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:24.536711    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:24.536719    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:24.536724    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:24.536736    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:24.536742    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:24.536750    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:24.536763    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:24.536771    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:24.536777    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:24.536791    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:24.536805    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:24.536813    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:24.536824    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:24.536837    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:24.536846    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:24.536853    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:24.536859    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:26.537156    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 7
	I0728 19:06:26.537169    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:26.537212    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:26.538045    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:26.538071    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:26.538087    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:26.538096    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:26.538112    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:26.538133    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:26.538144    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:26.538154    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:26.538161    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:26.538169    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:26.538176    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:26.538207    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:26.538222    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:26.538238    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:26.538252    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:26.538259    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:26.538268    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:26.538275    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:26.538286    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:28.540229    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 8
	I0728 19:06:28.540244    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:28.540352    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:28.541147    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:28.541182    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:28.541194    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:28.541202    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:28.541208    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:28.541216    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:28.541225    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:28.541242    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:28.541250    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:28.541258    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:28.541268    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:28.541276    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:28.541283    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:28.541290    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:28.541303    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:28.541319    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:28.541331    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:28.541339    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:28.541345    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:30.541352    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 9
	I0728 19:06:30.541366    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:30.541446    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:30.542242    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:30.542270    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:30.542285    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:30.542294    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:30.542301    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:30.542308    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:30.542314    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:30.542320    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:30.542327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:30.542333    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:30.542341    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:30.542348    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:30.542356    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:30.542365    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:30.542372    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:30.542388    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:30.542400    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:30.542414    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:30.542423    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:32.544430    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 10
	I0728 19:06:32.544444    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:32.544551    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:32.545323    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:32.545362    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:32.545371    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:32.545378    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:32.545386    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:32.545395    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:32.545403    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:32.545419    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:32.545430    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:32.545447    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:32.545455    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:32.545462    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:32.545470    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:32.545477    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:32.545483    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:32.545491    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:32.545505    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:32.545518    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:32.545527    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:34.547499    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 11
	I0728 19:06:34.547514    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:34.547602    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:34.548437    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:34.548489    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:34.548501    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:34.548530    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:34.548543    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:34.548554    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:34.548564    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:34.548583    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:34.548596    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:34.548613    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:34.548621    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:34.548629    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:34.548637    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:34.548653    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:34.548667    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:34.548677    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:34.548685    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:34.548693    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:34.548702    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:36.549243    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 12
	I0728 19:06:36.549256    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:36.549330    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:36.550104    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:36.550161    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:36.550172    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:36.550186    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:36.550194    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:36.550206    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:36.550213    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:36.550220    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:36.550228    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:36.550235    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:36.550243    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:36.550250    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:36.550257    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:36.550272    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:36.550287    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:36.550299    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:36.550307    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:36.550314    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:36.550322    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:38.551449    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 13
	I0728 19:06:38.551463    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:38.551540    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:38.552323    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:38.552379    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:38.552392    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:38.552403    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:38.552414    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:38.552429    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:38.552444    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:38.552458    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:38.552466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:38.552473    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:38.552481    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:38.552493    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:38.552504    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:38.552511    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:38.552519    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:38.552525    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:38.552533    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:38.552540    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:38.552548    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:40.553385    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 14
	I0728 19:06:40.553400    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:40.553464    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:40.554255    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:40.554300    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:40.554309    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:40.554326    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:40.554334    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:40.554340    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:40.554346    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:40.554361    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:40.554376    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:40.554385    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:40.554392    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:40.554409    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:40.554417    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:40.554424    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:40.554432    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:40.554439    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:40.554447    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:40.554454    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:40.554461    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:42.556509    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 15
	I0728 19:06:42.556522    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:42.556607    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:42.557527    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:42.557571    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:42.557582    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:42.557597    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:42.557604    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:42.557613    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:42.557621    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:42.557630    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:42.557639    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:42.557646    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:42.557654    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:42.557661    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:42.557669    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:42.557685    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:42.557697    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:42.557711    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:42.557721    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:42.557734    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:42.557744    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:44.559690    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 16
	I0728 19:06:44.559706    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:44.559809    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:44.560580    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:44.560642    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:44.560654    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:44.560662    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:44.560670    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:44.560691    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:44.560700    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:44.560712    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:44.560726    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:44.560740    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:44.560750    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:44.560758    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:44.560766    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:44.560780    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:44.560799    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:44.560808    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:44.560815    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:44.560823    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:44.560831    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:46.562813    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 17
	I0728 19:06:46.562833    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:46.562866    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:46.563705    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:46.563752    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:46.563764    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:46.563774    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:46.563783    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:46.563791    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:46.563797    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:46.563812    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:46.563820    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:46.563827    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:46.563835    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:46.563842    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:46.563848    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:46.563867    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:46.563884    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:46.563894    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:46.563902    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:46.563910    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:46.563917    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:48.565048    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 18
	I0728 19:06:48.565064    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:48.565169    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:48.565943    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:48.565987    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:48.565999    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:48.566016    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:48.566028    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:48.566037    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:48.566044    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:48.566052    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:48.566058    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:48.566066    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:48.566072    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:48.566080    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:48.566089    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:48.566096    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:48.566107    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:48.566125    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:48.566133    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:48.566140    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:48.566147    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:50.566339    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 19
	I0728 19:06:50.566354    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:50.566415    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:50.567443    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:50.567503    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:50.567521    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:50.567540    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:50.567555    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:50.567566    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:50.567574    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:50.567586    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:50.567593    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:50.567608    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:50.567617    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:50.567624    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:50.567633    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:50.567650    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:50.567668    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:50.567682    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:50.567692    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:50.567702    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:50.567709    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:52.567755    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 20
	I0728 19:06:52.567771    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:52.567881    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:52.568786    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:52.568829    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:52.568840    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:52.568855    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:52.568871    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:52.568889    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:52.568903    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:52.568915    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:52.568923    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:52.568930    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:52.568937    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:52.568949    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:52.568958    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:52.568966    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:52.568975    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:52.568993    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:52.569001    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:52.569008    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:52.569017    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:54.568996    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 21
	I0728 19:06:54.569011    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:54.569106    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:54.569880    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:54.569948    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:54.569960    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:54.569982    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:54.570001    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:54.570016    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:54.570026    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:54.570035    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:54.570044    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:54.570052    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:54.570062    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:54.570070    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:54.570081    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:54.570088    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:54.570095    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:54.570118    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:54.570132    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:54.570143    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:54.570151    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:56.572148    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 22
	I0728 19:06:56.572162    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:56.572206    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:56.573159    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:56.573197    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:56.573204    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:56.573221    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:56.573239    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:56.573266    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:56.573277    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:56.573288    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:56.573298    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:56.573308    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:56.573314    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:56.573332    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:56.573343    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:56.573353    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:56.573361    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:56.573366    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:56.573373    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:56.573381    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:56.573389    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:06:58.573450    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 23
	I0728 19:06:58.573462    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:06:58.573576    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:06:58.574352    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:06:58.574391    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:06:58.574402    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:06:58.574410    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:06:58.574416    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:06:58.574425    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:06:58.574434    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:06:58.574456    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:06:58.574469    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:06:58.574487    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:06:58.574499    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:06:58.574508    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:06:58.574515    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:06:58.574521    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:06:58.574537    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:06:58.574552    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:06:58.574572    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:06:58.574586    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:06:58.574598    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
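	The attempts above all follow the same pattern: the hyperkit driver re-reads /var/db/dhcpd_leases, finds the same 17 stale `minikube` entries, and never sees the new VM's MAC (3e:41:1a:b9:71:cb), so it sleeps and retries. A minimal sketch of that per-attempt lookup, assuming the standard macOS dhcpd_leases field layout (`ip_address=…`, `hw_address=<type>,<mac>`); the function name and parsing here are illustrative, not the driver's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// findIPByMAC scans the text of a macOS /var/db/dhcpd_leases file for an
// entry whose hw_address matches mac and returns that entry's ip_address.
// Each lease record lists ip_address before hw_address, so we remember the
// most recent ip_address and report it when the MAC line matches.
func findIPByMAC(leases, mac string) (string, bool) {
	var ip string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v
		}
		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
			// hw_address is "<type>,<mac>", e.g. "1,f6:32:60:55:a4:6b".
			if _, m, found := strings.Cut(v, ","); found && m == mac {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	// One lease record in the dhcpd_leases format seen in the log above.
	sample := `{
	name=minikube
	ip_address=192.169.0.2
	hw_address=1,f6:32:60:55:a4:6b
	lease=0x66a8380d
}`
	if ip, ok := findIPByMAC(sample, "f6:32:60:55:a4:6b"); ok {
		fmt.Println("found lease:", ip)
	} else {
		// This is the branch every attempt in the log hits for
		// 3e:41:1a:b9:71:cb: no lease yet, so the driver retries.
		fmt.Println("no lease yet; driver would retry")
	}
}
```

	In the failing run, the new VM evidently never requests a lease, so this lookup misses on every attempt until the driver gives up.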
	I0728 19:07:00.576541    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 24
	I0728 19:07:00.576556    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:00.576642    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:07:00.577422    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:07:00.577470    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:00.577480    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:00.577489    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:00.577496    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:00.577504    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:00.577509    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:00.577516    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:00.577524    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:00.577542    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:00.577550    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:00.577557    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:00.577564    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:00.577574    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:00.577581    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:00.577588    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:00.577599    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:00.577609    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:00.577618    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:02.579181    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 25
	I0728 19:07:02.579198    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:02.579276    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:07:02.580092    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:07:02.580131    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:02.580140    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:02.580150    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:02.580157    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:02.580164    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:02.580170    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:02.580177    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:02.580184    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:02.580205    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:02.580215    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:02.580223    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:02.580231    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:02.580239    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:02.580246    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:02.580257    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:02.580271    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:02.580283    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:02.580291    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:04.580342    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 26
	I0728 19:07:04.580359    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:04.580439    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:07:04.581202    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:07:04.581250    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:04.581263    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:04.581273    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:04.581285    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:04.581301    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:04.581310    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:04.581327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:04.581334    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:04.581341    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:04.581349    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:04.581370    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:04.581381    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:04.581396    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:04.581413    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:04.581427    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:04.581437    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:04.581449    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:04.581455    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:06.583422    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 27
	I0728 19:07:06.583438    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:06.583486    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:07:06.584267    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:07:06.584316    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:06.584327    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:06.584337    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:06.584349    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:06.584357    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:06.584367    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:06.584373    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:06.584381    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:06.584388    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:06.584396    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:06.584404    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:06.584410    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:06.584439    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:06.584452    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:06.584466    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:06.584475    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:06.584489    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:06.584503    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:08.585183    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 28
	I0728 19:07:08.585198    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:08.585285    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:07:08.586066    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:07:08.586114    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:08.586136    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:08.586144    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:08.586166    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:08.586178    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:08.586185    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:08.586207    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:08.586217    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:08.586225    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:08.586233    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:08.586238    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:08.586247    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:08.586259    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:08.586268    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:08.586276    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:08.586283    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:08.586294    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:08.586303    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:10.587127    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Attempt 29
	I0728 19:07:10.587145    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:07:10.587250    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | hyperkit pid from json: 5784
	I0728 19:07:10.588033    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Searching for 3e:41:1a:b9:71:cb in /var/db/dhcpd_leases ...
	I0728 19:07:10.588084    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:07:10.588097    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:07:10.588110    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:07:10.588117    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:07:10.588127    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:07:10.588135    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:07:10.588141    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:07:10.588148    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:07:10.588155    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:07:10.588161    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:07:10.588181    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:07:10.588194    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:07:10.588203    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:07:10.588209    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:07:10.588228    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:07:10.588240    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:07:10.588257    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:07:10.588268    5724 main.go:141] libmachine: (force-systemd-flag-925000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:07:12.590337    5724 client.go:171] duration metric: took 1m0.823001455s to LocalClient.Create
	I0728 19:07:14.591824    5724 start.go:128] duration metric: took 1m2.855821982s to createHost
	I0728 19:07:14.591852    5724 start.go:83] releasing machines lock for "force-systemd-flag-925000", held for 1m2.855946207s
	W0728 19:07:14.591987    5724 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-925000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:41:1a:b9:71:cb
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-925000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:41:1a:b9:71:cb
	I0728 19:07:14.675895    5724 out.go:177] 
	W0728 19:07:14.697201    5724 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:41:1a:b9:71:cb
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:41:1a:b9:71:cb
	W0728 19:07:14.697211    5724 out.go:239] * 
	* 
	W0728 19:07:14.697855    5724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 19:07:14.759111    5724 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-925000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-925000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-925000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (173.986818ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-925000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-925000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-28 19:07:15.040072 -0700 PDT m=+4876.774628421
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-925000 -n force-systemd-flag-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-925000 -n force-systemd-flag-925000: exit status 7 (78.975736ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 19:07:15.117176    5792 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:07:15.117196    5792 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-925000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-925000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-925000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-925000: (5.258343781s)
--- FAIL: TestForceSystemdFlag (251.75s)
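The failure above comes from the driver's retry loop: on each attempt it re-reads the DHCP lease table and searches for the VM's MAC address (3e:41:1a:b9:71:cb), which never appears among the 17 entries, so `LocalClient.Create` times out after ~30 attempts. A minimal sketch of that lookup, parsing the entry format exactly as echoed in the debug log (note the raw /var/db/dhcpd_leases file uses a different on-disk syntax; `find_ip` is a hypothetical helper for illustration, not actual driver code):

```python
import re

# Matches lease entries in the form printed by the hyperkit driver's debug log:
# {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:... Lease:0x...}
ENTRY_RE = re.compile(
    r"\{Name:(?P<name>\S+) IPAddress:(?P<ip>\S+) "
    r"HWAddress:(?P<hw>\S+) ID:(?P<id>\S+) Lease:(?P<lease>\S+)\}"
)

def find_ip(leases_text: str, mac: str):
    """Return the IP address leased to `mac`, or None if no entry matches."""
    for m in ENTRY_RE.finditer(leases_text):
        if m.group("hw") == mac:
            return m.group("ip")
    return None

sample = (
    "{Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 "
    "ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}\n"
    "{Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b "
    "ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}\n"
)

print(find_ip(sample, "1e:c3:6d:9a:fd:31"))  # 192.169.0.18
print(find_ip(sample, "3e:41:1a:b9:71:cb"))  # None — the condition seen in this test run
```

When the target MAC never shows up, as in this run, the driver surfaces the "could not find an IP address for 3e:41:1a:b9:71:cb" error after exhausting its retries.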

                                                
                                    
TestForceSystemdEnv (232.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-720000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0728 19:00:50.081406    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:01:00.998447    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 19:02:13.147850    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-720000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m47.049592068s)

                                                
                                                
-- stdout --
	* [force-systemd-env-720000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-720000" primary control-plane node in "force-systemd-env-720000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-720000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	I0728 19:00:19.163810    5677 out.go:291] Setting OutFile to fd 1 ...
	I0728 19:00:19.164000    5677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:00:19.164005    5677 out.go:304] Setting ErrFile to fd 2...
	I0728 19:00:19.164009    5677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:00:19.164180    5677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 19:00:19.165702    5677 out.go:298] Setting JSON to false
	I0728 19:00:19.188279    5677 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5390,"bootTime":1722213029,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 19:00:19.188372    5677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 19:00:19.210749    5677 out.go:177] * [force-systemd-env-720000] minikube v1.33.1 on Darwin 14.5
	I0728 19:00:19.252345    5677 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 19:00:19.252436    5677 notify.go:220] Checking for updates...
	I0728 19:00:19.299024    5677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 19:00:19.319969    5677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 19:00:19.341087    5677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 19:00:19.362083    5677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:00:19.382894    5677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0728 19:00:19.403481    5677 config.go:182] Loaded profile config "offline-docker-461000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 19:00:19.403568    5677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 19:00:19.432151    5677 out.go:177] * Using the hyperkit driver based on user configuration
	I0728 19:00:19.473066    5677 start.go:297] selected driver: hyperkit
	I0728 19:00:19.473077    5677 start.go:901] validating driver "hyperkit" against <nil>
	I0728 19:00:19.473087    5677 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 19:00:19.475914    5677 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:00:19.476025    5677 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 19:00:19.484205    5677 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 19:00:19.488013    5677 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:00:19.488034    5677 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 19:00:19.488071    5677 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 19:00:19.488273    5677 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 19:00:19.488297    5677 cni.go:84] Creating CNI manager for ""
	I0728 19:00:19.488313    5677 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 19:00:19.488319    5677 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 19:00:19.488378    5677 start.go:340] cluster config:
	{Name:force-systemd-env-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-720000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 19:00:19.488469    5677 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:00:19.529920    5677 out.go:177] * Starting "force-systemd-env-720000" primary control-plane node in "force-systemd-env-720000" cluster
	I0728 19:00:19.551070    5677 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 19:00:19.551095    5677 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 19:00:19.551110    5677 cache.go:56] Caching tarball of preloaded images
	I0728 19:00:19.551203    5677 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 19:00:19.551211    5677 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 19:00:19.551282    5677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/force-systemd-env-720000/config.json ...
	I0728 19:00:19.551301    5677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/force-systemd-env-720000/config.json: {Name:mk27de26d0da2c9fa9e709f3e24f1eba278fa016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 19:00:19.551651    5677 start.go:360] acquireMachinesLock for force-systemd-env-720000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:00:57.363048    5677 start.go:364] duration metric: took 37.811653589s to acquireMachinesLock for "force-systemd-env-720000"
	I0728 19:00:57.363091    5677 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-720000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-720000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:00:57.363148    5677 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:00:57.384595    5677 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:00:57.384738    5677 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:00:57.384776    5677 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:00:57.393201    5677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53474
	I0728 19:00:57.393575    5677 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:00:57.394080    5677 main.go:141] libmachine: Using API Version  1
	I0728 19:00:57.394126    5677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:00:57.394412    5677 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:00:57.394538    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .GetMachineName
	I0728 19:00:57.394638    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .DriverName
	I0728 19:00:57.394742    5677 start.go:159] libmachine.API.Create for "force-systemd-env-720000" (driver="hyperkit")
	I0728 19:00:57.394768    5677 client.go:168] LocalClient.Create starting
	I0728 19:00:57.394800    5677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:00:57.394852    5677 main.go:141] libmachine: Decoding PEM data...
	I0728 19:00:57.394872    5677 main.go:141] libmachine: Parsing certificate...
	I0728 19:00:57.394934    5677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:00:57.394971    5677 main.go:141] libmachine: Decoding PEM data...
	I0728 19:00:57.394983    5677 main.go:141] libmachine: Parsing certificate...
	I0728 19:00:57.394994    5677 main.go:141] libmachine: Running pre-create checks...
	I0728 19:00:57.395004    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .PreCreateCheck
	I0728 19:00:57.395082    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:57.395230    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .GetConfigRaw
	I0728 19:00:57.405790    5677 main.go:141] libmachine: Creating machine...
	I0728 19:00:57.405799    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .Create
	I0728 19:00:57.405890    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:57.406032    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:00:57.405883    5692 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:00:57.406067    5677 main.go:141] libmachine: (force-systemd-env-720000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:00:57.629821    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:00:57.629718    5692 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/id_rsa...
	I0728 19:00:57.745114    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:00:57.745041    5692 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/force-systemd-env-720000.rawdisk...
	I0728 19:00:57.745125    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Writing magic tar header
	I0728 19:00:57.745134    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Writing SSH key tar header
	I0728 19:00:57.745690    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:00:57.745650    5692 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000 ...
	I0728 19:00:58.118288    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:58.118307    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/hyperkit.pid
	I0728 19:00:58.118325    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Using UUID 86bfd861-d810-48e2-9058-fc187f30b3ed
	I0728 19:00:58.144298    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Generated MAC 76:1f:25:88:c0:64
	I0728 19:00:58.144326    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-720000
	I0728 19:00:58.144355    5677 main.go:141] (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"86bfd861-d810-48e2-9058-fc187f30b3ed", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:00:58.144385    5677 main.go:141] (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"86bfd861-d810-48e2-9058-fc187f30b3ed", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:00:58.144426    5677 main.go:141] (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "86bfd861-d810-48e2-9058-fc187f30b3ed", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/force-systemd-env-720000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-720000"}
	I0728 19:00:58.144471    5677 main.go:141] (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 86bfd861-d810-48e2-9058-fc187f30b3ed -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/force-systemd-env-720000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-720000"
	I0728 19:00:58.144486    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:00:58.147604    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 DEBUG: hyperkit: Pid is 5693
	I0728 19:00:58.149024    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 0
	I0728 19:00:58.149041    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:00:58.149123    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:00:58.149971    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:00:58.150040    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:00:58.150060    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:00:58.150095    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:00:58.150108    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:00:58.150121    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:00:58.150133    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:00:58.150142    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:00:58.150154    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:00:58.150163    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:00:58.150198    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:00:58.150213    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:00:58.150229    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:00:58.150251    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:00:58.150265    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:00:58.150279    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:00:58.150295    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:00:58.150307    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:00:58.150321    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:00:58.155256    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:00:58.163604    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:00:58.164627    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:00:58.164657    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:00:58.164675    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:00:58.164689    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:00:58.542406    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:00:58.542427    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:00:58.657516    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:00:58.657535    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:00:58.657550    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:00:58.657571    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:00:58.658417    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:00:58.658427    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:00:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:01:00.151730    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 1
	I0728 19:01:00.151746    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:00.151777    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:00.152554    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:00.152612    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:00.152624    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:00.152633    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:00.152642    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:00.152648    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:00.152656    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:00.152673    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:00.152679    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:00.152686    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:00.152693    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:00.152712    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:00.152726    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:00.152734    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:00.152741    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:00.152747    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:00.152756    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:00.152771    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:00.152784    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:02.153219    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 2
	I0728 19:01:02.153236    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:02.153326    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:02.154124    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:02.154184    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:02.154199    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:02.154209    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:02.154216    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:02.154233    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:02.154247    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:02.154257    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:02.154266    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:02.154281    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:02.154292    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:02.154301    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:02.154309    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:02.154318    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:02.154326    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:02.154334    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:02.154352    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:02.154363    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:02.154369    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:04.067762    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:01:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0728 19:01:04.067882    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:01:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0728 19:01:04.067895    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:01:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0728 19:01:04.087662    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:01:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0728 19:01:04.156210    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 3
	I0728 19:01:04.156279    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:04.156486    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:04.157883    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:04.157997    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:04.158017    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:04.158031    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:04.158042    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:04.158059    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:04.158072    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:04.158095    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:04.158108    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:04.158121    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:04.158153    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:04.158167    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:04.158178    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:04.158191    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:04.158226    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:04.158238    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:04.158248    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:04.158259    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:04.158270    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:06.158185    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 4
	I0728 19:01:06.158200    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:06.158299    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:06.159063    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:06.159123    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:06.159142    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:06.159155    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:06.159182    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:06.159194    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:06.159203    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:06.159222    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:06.159231    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:06.159238    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:06.159246    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:06.159262    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:06.159280    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:06.159290    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:06.159306    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:06.159322    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:06.159334    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:06.159340    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:06.159348    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:08.161396    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 5
	I0728 19:01:08.161408    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:08.161485    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:08.162272    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:08.162314    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:08.162326    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:08.162335    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:08.162345    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:08.162361    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:08.162376    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:08.162384    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:08.162390    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:08.162397    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:08.162406    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:08.162425    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:08.162434    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:08.162443    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:08.162456    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:08.162464    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:08.162472    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:08.162479    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:08.162485    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:10.162578    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 6
	I0728 19:01:10.162594    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:10.162668    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:10.163416    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:10.163465    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:10.163481    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:10.163494    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:10.163513    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:10.163519    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:10.163526    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:10.163533    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:10.163539    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:10.163548    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:10.163558    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:10.163566    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:10.163573    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:10.163580    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:10.163588    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:10.163596    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:10.163602    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:10.163610    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:10.163618    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:12.163633    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 7
	I0728 19:01:12.163647    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:12.163694    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:12.164473    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:12.164492    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:12.164502    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:12.164508    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:12.164514    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:12.164522    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:12.164530    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:12.164535    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:12.164542    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:12.164548    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:12.164555    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:12.164561    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:12.164575    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:12.164587    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:12.164597    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:12.164604    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:12.164611    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:12.164619    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:12.164637    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:14.164609    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 8
	I0728 19:01:14.164622    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:14.164729    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:14.165497    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:14.165549    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:14.165565    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:14.165577    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:14.165585    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:14.165591    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:14.165599    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:14.165619    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:14.165628    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:14.165635    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:14.165642    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:14.165665    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:14.165678    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:14.165695    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:14.165709    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:14.165719    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:14.165731    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:14.165739    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:14.165745    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:16.167695    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 9
	I0728 19:01:16.167710    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:16.167775    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:16.168577    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:16.168636    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:16.168647    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:16.168655    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:16.168665    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:16.168683    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:16.168699    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:16.168707    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:16.168716    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:16.168727    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:16.168735    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:16.168741    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:16.168758    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:16.168771    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:16.168780    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:16.168787    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:16.168794    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:16.168803    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:16.168812    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:18.169120    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 10
	I0728 19:01:18.169135    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:18.169244    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:18.170001    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:18.170055    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:18.170077    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:18.170096    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:18.170102    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:18.170116    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:18.170130    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:18.170144    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:18.170160    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:18.170169    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:18.170177    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:18.170187    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:18.170195    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:18.170203    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:18.170220    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:18.170230    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:18.170237    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:18.170249    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:18.170259    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:20.172246    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 11
	I0728 19:01:20.172269    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:20.172343    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:20.173113    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:20.173150    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:20.173160    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:20.173172    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:20.173182    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:20.173191    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:20.173199    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:20.173207    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:20.173213    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:20.173223    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:20.173231    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:20.173238    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:20.173249    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:20.173255    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:20.173262    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:20.173270    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:20.173277    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:20.173285    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:20.173293    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:22.174450    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 12
	I0728 19:01:22.174462    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:22.174562    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:22.175332    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:22.175377    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:22.175390    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:22.175409    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:22.175420    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:22.175434    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:22.175444    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:22.175451    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:22.175459    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:22.175476    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:22.175485    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:22.175493    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:22.175502    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:22.175518    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:22.175532    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:22.175540    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:22.175546    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:22.175563    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:22.175575    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:24.176381    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 13
	I0728 19:01:24.176396    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:24.176442    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:24.177220    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:24.177268    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:24.177290    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:24.177300    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:24.177328    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:24.177341    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:24.177350    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:24.177358    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:24.177370    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:24.177376    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:24.177384    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:24.177391    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:24.177406    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:24.177419    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:24.177428    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:24.177434    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:24.177443    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:24.177454    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:24.177463    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:26.177909    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 14
	I0728 19:01:26.177925    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:26.177990    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:26.178824    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:26.178887    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:26.178897    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:26.178906    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:26.178916    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:26.178926    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:26.178933    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:26.178947    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:26.178958    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:26.178984    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:26.179013    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:26.179022    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:26.179031    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:26.179037    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:26.179042    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:26.179050    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:26.179057    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:26.179063    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:26.179069    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:28.181170    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 15
	I0728 19:01:28.181197    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:28.181268    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:28.182059    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:28.182100    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:28.182112    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:28.182125    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:28.182132    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:28.182139    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:28.182149    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:28.182157    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:28.182164    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:28.182172    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:28.182179    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:28.182186    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:28.182193    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:28.182199    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:28.182206    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:28.182214    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:28.182221    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:28.182228    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:28.182246    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:30.182591    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 16
	I0728 19:01:30.182606    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:30.182642    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:30.183479    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:30.183512    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:30.183522    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:30.183533    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:30.183546    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:30.183555    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:30.183568    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:30.183580    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:30.183588    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:30.183597    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:30.183613    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:30.183622    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:30.183630    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:30.183639    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:30.183650    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:30.183658    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:30.183665    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:30.183674    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:30.183682    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:32.185677    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 17
	I0728 19:01:32.185691    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:32.185771    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:32.186559    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:32.186598    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:32.186608    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:32.186630    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:32.186636    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:32.186645    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:32.186653    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:32.186659    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:32.186668    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:32.186686    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:32.186699    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:32.186707    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:32.186717    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:32.186724    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:32.186732    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:32.186750    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:32.186760    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:32.186769    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:32.186777    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:34.187337    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 18
	I0728 19:01:34.187348    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:34.187421    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:34.188380    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:34.188403    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:34.188418    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:34.188429    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:34.188436    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:34.188442    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:34.188450    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:34.188459    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:34.188472    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:34.188482    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:34.188500    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:34.188508    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:34.188517    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:34.188525    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:34.188533    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:34.188540    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:34.188548    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:34.188555    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:34.188563    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:36.190612    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 19
	I0728 19:01:36.190626    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:36.190672    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:36.191434    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:36.191480    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:36.191499    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:36.191514    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:36.191523    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:36.191531    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:36.191545    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:36.191567    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:36.191579    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:36.191588    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:36.191595    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:36.191607    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:36.191626    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:36.191634    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:36.191642    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:36.191651    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:36.191657    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:36.191668    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:36.191681    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:38.193627    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 20
	I0728 19:01:38.193644    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:38.193728    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:38.194511    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:38.194534    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:38.194547    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:38.194555    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:38.194562    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:38.194590    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:38.194596    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:38.194605    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:38.194610    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:38.194617    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:38.194622    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:38.194629    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:38.194650    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:38.194661    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:38.194668    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:38.194675    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:38.194681    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:38.194686    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:38.194692    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:40.196687    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 21
	I0728 19:01:40.196711    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:40.196763    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:40.197545    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:40.197588    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:40.197600    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:40.197621    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:40.197637    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:40.197645    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:40.197662    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:40.197674    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:40.197687    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:40.197698    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:40.197705    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:40.197717    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:40.197731    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:40.197739    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:40.197747    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:40.197753    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:40.197759    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:40.197765    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:40.197771    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:42.198811    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 22
	I0728 19:01:42.198827    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:42.198877    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:42.199659    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:42.199686    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:42.199698    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:42.199706    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:42.199715    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:42.199726    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:42.199733    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:42.199741    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:42.199765    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:42.199779    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:42.199799    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:42.199808    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:42.199817    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:42.199826    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:42.199835    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:42.199841    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:42.199848    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:42.199867    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:42.199893    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:44.200165    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 23
	I0728 19:01:44.200179    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:44.200258    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:44.201088    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:44.201141    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:44.201155    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:44.201170    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:44.201180    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:44.201188    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:44.201195    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:44.201202    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:44.201210    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:44.201217    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:44.201231    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:44.201238    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:44.201245    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:44.201252    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:44.201260    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:44.201268    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:44.201275    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:44.201281    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:44.201289    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:46.201298    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 24
	I0728 19:01:46.201311    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:46.201408    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:46.202190    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:46.202216    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:46.202227    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:46.202244    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:46.202254    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:46.202260    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:46.202268    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:46.202275    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:46.202294    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:46.202309    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:46.202319    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:46.202337    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:46.202350    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:46.202366    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:46.202378    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:46.202387    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:46.202396    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:46.202407    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:46.202417    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:48.203046    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 25
	I0728 19:01:48.203062    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:48.203106    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:48.203950    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:48.204004    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:48.204016    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:48.204029    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:48.204043    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:48.204052    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:48.204057    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:48.204066    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:48.204076    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:48.204092    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:48.204105    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:48.204113    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:48.204122    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:48.204129    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:48.204135    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:48.204147    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:48.204160    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:48.204172    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:48.204180    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:50.206170    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 26
	I0728 19:01:50.206186    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:50.206258    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:50.207101    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:50.207145    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:50.207155    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:50.207164    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:50.207171    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:50.207177    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:50.207184    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:50.207192    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:50.207199    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:50.207212    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:50.207222    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:50.207240    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:50.207252    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:50.207264    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:50.207278    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:50.207289    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:50.207297    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:50.207306    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:50.207314    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:52.209331    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 27
	I0728 19:01:52.209345    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:52.209431    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:52.210205    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:52.210266    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:52.210274    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:52.210297    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:52.210309    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:52.210319    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:52.210328    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:52.210335    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:52.210343    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:52.210352    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:52.210362    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:52.210369    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:52.210377    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:52.210383    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:52.210391    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:52.210406    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:52.210417    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:52.210425    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:52.210433    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:54.211613    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 28
	I0728 19:01:54.211628    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:54.211695    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:54.212474    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:54.212487    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:54.212496    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:54.212504    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:54.212512    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:54.212518    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:54.212525    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:54.212531    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:54.212537    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:54.212543    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:54.212551    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:54.212560    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:54.212566    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:54.212573    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:54.212585    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:54.212604    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:54.212616    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:54.212625    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:54.212633    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:56.212780    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 29
	I0728 19:01:56.212795    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:01:56.212863    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:01:56.213645    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 76:1f:25:88:c0:64 in /var/db/dhcpd_leases ...
	I0728 19:01:56.213684    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:01:56.213698    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:01:56.213706    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:01:56.213729    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:01:56.213753    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:01:56.213761    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:01:56.213768    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:01:56.213782    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:01:56.213790    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:01:56.213796    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:01:56.213804    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:01:56.213812    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:01:56.213820    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:01:56.213826    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:01:56.213838    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:01:56.213859    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:01:56.213866    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:01:56.213874    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:01:58.214355    5677 client.go:171] duration metric: took 1m0.820014674s to LocalClient.Create
	I0728 19:02:00.214725    5677 start.go:128] duration metric: took 1m2.85201921s to createHost
	I0728 19:02:00.214738    5677 start.go:83] releasing machines lock for "force-systemd-env-720000", held for 1m2.852133109s
	W0728 19:02:00.214756    5677 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:1f:25:88:c0:64
	I0728 19:02:00.215082    5677 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:02:00.215122    5677 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:02:00.223566    5677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53479
	I0728 19:02:00.223907    5677 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:02:00.224286    5677 main.go:141] libmachine: Using API Version  1
	I0728 19:02:00.224302    5677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:02:00.224526    5677 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:02:00.224899    5677 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:02:00.224920    5677 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:02:00.233223    5677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53481
	I0728 19:02:00.233555    5677 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:02:00.233883    5677 main.go:141] libmachine: Using API Version  1
	I0728 19:02:00.233895    5677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:02:00.234114    5677 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:02:00.234221    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .GetState
	I0728 19:02:00.234303    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.234368    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:00.235322    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .DriverName
	I0728 19:02:00.298033    5677 out.go:177] * Deleting "force-systemd-env-720000" in hyperkit ...
	I0728 19:02:00.318936    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .Remove
	I0728 19:02:00.319085    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.319094    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.319180    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:00.320080    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:00.320147    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | waiting for graceful shutdown
	I0728 19:02:01.320946    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:01.321131    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:01.322013    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | waiting for graceful shutdown
	I0728 19:02:02.324176    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:02.324273    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:02.325933    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | waiting for graceful shutdown
	I0728 19:02:03.326285    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:03.326365    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:03.326949    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | waiting for graceful shutdown
	I0728 19:02:04.329074    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:04.329166    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:04.329742    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | waiting for graceful shutdown
	I0728 19:02:05.329874    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:05.329970    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5693
	I0728 19:02:05.331047    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | sending sigkill
	I0728 19:02:05.331056    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:02:05.341740    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:02:05 WARN : hyperkit: failed to read stderr: EOF
	I0728 19:02:05.341757    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:02:05 WARN : hyperkit: failed to read stdout: EOF
	W0728 19:02:05.356489    5677 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:1f:25:88:c0:64
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:1f:25:88:c0:64
	I0728 19:02:05.356507    5677 start.go:729] Will try again in 5 seconds ...
	I0728 19:02:10.358538    5677 start.go:360] acquireMachinesLock for force-systemd-env-720000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:03:03.038008    5677 start.go:364] duration metric: took 52.679826978s to acquireMachinesLock for "force-systemd-env-720000"
	I0728 19:03:03.038046    5677 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-720000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-720000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 19:03:03.038107    5677 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 19:03:03.058394    5677 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 19:03:03.058475    5677 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:03:03.058510    5677 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:03:03.067348    5677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53485
	I0728 19:03:03.067764    5677 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:03:03.068242    5677 main.go:141] libmachine: Using API Version  1
	I0728 19:03:03.068261    5677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:03:03.068560    5677 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:03:03.068678    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .GetMachineName
	I0728 19:03:03.068779    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .DriverName
	I0728 19:03:03.068949    5677 start.go:159] libmachine.API.Create for "force-systemd-env-720000" (driver="hyperkit")
	I0728 19:03:03.068965    5677 client.go:168] LocalClient.Create starting
	I0728 19:03:03.068992    5677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 19:03:03.069048    5677 main.go:141] libmachine: Decoding PEM data...
	I0728 19:03:03.069060    5677 main.go:141] libmachine: Parsing certificate...
	I0728 19:03:03.069105    5677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 19:03:03.069147    5677 main.go:141] libmachine: Decoding PEM data...
	I0728 19:03:03.069159    5677 main.go:141] libmachine: Parsing certificate...
	I0728 19:03:03.069172    5677 main.go:141] libmachine: Running pre-create checks...
	I0728 19:03:03.069178    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .PreCreateCheck
	I0728 19:03:03.069263    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:03.069292    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .GetConfigRaw
	I0728 19:03:03.100657    5677 main.go:141] libmachine: Creating machine...
	I0728 19:03:03.100679    5677 main.go:141] libmachine: (force-systemd-env-720000) Calling .Create
	I0728 19:03:03.100818    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:03.101011    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:03:03.100812    5713 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:03:03.101061    5677 main.go:141] libmachine: (force-systemd-env-720000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 19:03:03.426374    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:03:03.426317    5713 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/id_rsa...
	I0728 19:03:03.516531    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:03:03.516482    5713 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/force-systemd-env-720000.rawdisk...
	I0728 19:03:03.516548    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Writing magic tar header
	I0728 19:03:03.516559    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Writing SSH key tar header
	I0728 19:03:03.516874    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | I0728 19:03:03.516841    5713 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000 ...
	I0728 19:03:03.893414    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:03.893436    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/hyperkit.pid
	I0728 19:03:03.893447    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Using UUID 6124f10f-d9ea-4793-9617-f4fb2236e698
	I0728 19:03:03.918776    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Generated MAC 5a:e0:89:ca:77:b7
	I0728 19:03:03.918797    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-720000
	I0728 19:03:03.918834    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6124f10f-d9ea-4793-9617-f4fb2236e698", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:03:03.918869    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6124f10f-d9ea-4793-9617-f4fb2236e698", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:03:03.918943    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6124f10f-d9ea-4793-9617-f4fb2236e698", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/force-systemd-env-720000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-720000"}
	I0728 19:03:03.918995    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6124f10f-d9ea-4793-9617-f4fb2236e698 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/force-systemd-env-720000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-720000"
	I0728 19:03:03.919009    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:03:03.921868    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 DEBUG: hyperkit: Pid is 5723
	I0728 19:03:03.922957    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 0
	I0728 19:03:03.922971    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:03.923061    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:03.924092    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:03.924183    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:03.924205    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:03.924244    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:03.924265    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:03.924281    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:03.924304    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:03.924333    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:03.924346    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:03.924359    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:03.924374    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:03.924387    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:03.924402    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:03.924419    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:03.924432    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:03.924444    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:03.924456    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:03.924474    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:03.924492    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:03.929387    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:03:03.938781    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/force-systemd-env-720000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:03:03.939565    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:03:03.939590    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:03:03.939604    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:03:03.939620    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:03:04.316734    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:03:04.316747    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:03:04.431419    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:03:04.431441    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:03:04.431457    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:03:04.431484    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:03:04.432311    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:03:04.432323    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:03:05.926231    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 1
	I0728 19:03:05.926251    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:05.926298    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:05.927108    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:05.927141    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:05.927158    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:05.927181    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:05.927194    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:05.927207    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:05.927217    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:05.927224    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:05.927232    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:05.927241    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:05.927254    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:05.927261    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:05.927269    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:05.927276    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:05.927284    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:05.927293    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:05.927299    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:05.927305    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:05.927313    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:07.927337    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 2
	I0728 19:03:07.927350    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:07.927447    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:07.928228    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:07.928292    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:07.928301    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:07.928334    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:07.928353    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:07.928366    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:07.928372    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:07.928386    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:07.928396    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:07.928403    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:07.928410    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:07.928419    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:07.928429    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:07.928436    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:07.928447    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:07.928463    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:07.928475    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:07.928490    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:07.928501    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:09.831760    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0728 19:03:09.831945    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0728 19:03:09.831957    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0728 19:03:09.852614    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | 2024/07/28 19:03:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0728 19:03:09.929760    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 3
	I0728 19:03:09.929791    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:09.929951    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:09.931372    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:09.931503    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:09.931522    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:09.931539    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:09.931550    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:09.931615    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:09.931637    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:09.931676    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:09.931714    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:09.931726    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:09.931738    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:09.931758    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:09.931768    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:09.931778    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:09.931786    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:09.931799    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:09.931813    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:09.931823    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:09.931834    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:11.931680    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 4
	I0728 19:03:11.931697    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:11.931817    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:11.932574    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:11.932622    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:11.932630    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:11.932649    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:11.932666    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:11.932674    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:11.932680    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:11.932694    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:11.932712    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:11.932722    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:11.932729    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:11.932739    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:11.932748    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:11.932756    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:11.932764    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:11.932773    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:11.932781    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:11.932788    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:11.932799    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:13.934159    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 5
	I0728 19:03:13.934184    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:13.934287    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:13.935094    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:13.935141    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:13.935156    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:13.935162    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:13.935169    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:13.935175    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:13.935190    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:13.935197    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:13.935204    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:13.935210    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:13.935216    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:13.935224    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:13.935245    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:13.935259    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:13.935269    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:13.935277    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:13.935284    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:13.935292    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:13.935300    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:15.936739    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 6
	I0728 19:03:15.936756    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:15.936832    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:15.937604    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:15.937631    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:15.937641    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:15.937653    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:15.937661    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:15.937682    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:15.937692    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:15.937699    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:15.937709    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:15.937716    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:15.937723    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:15.937729    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:15.937738    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:15.937747    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:15.937765    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:15.937777    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:15.937784    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:15.937790    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:15.937798    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:17.938295    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 7
	I0728 19:03:17.938310    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:17.938361    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:17.939117    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:17.939169    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:17.939178    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:17.939186    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:17.939193    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:17.939200    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:17.939214    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:17.939221    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:17.939243    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:17.939254    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:17.939263    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:17.939278    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:17.939287    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:17.939295    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:17.939303    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:17.939322    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:17.939337    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:17.939345    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:17.939354    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:19.939820    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 8
	I0728 19:03:19.939833    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:19.939887    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:19.940824    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:19.940877    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:19.940890    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:19.940897    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:19.940904    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:19.940912    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:19.940929    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:19.940941    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:19.940951    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:19.940960    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:19.940968    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:19.940975    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:19.940981    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:19.940989    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:19.940996    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:19.941008    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:19.941023    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:19.941039    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:19.941054    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:21.943027    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 9
	I0728 19:03:21.943042    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:21.943170    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:21.944127    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:21.944154    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:21.944161    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:21.944169    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:21.944176    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:21.944199    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:21.944209    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:21.944217    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:21.944226    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:21.944244    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:21.944255    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:21.944263    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:21.944272    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:21.944285    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:21.944294    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:21.944300    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:21.944309    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:21.944319    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:21.944331    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:23.946321    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 10
	I0728 19:03:23.946337    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:23.946393    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:23.947218    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:23.947259    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:23.947272    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:23.947285    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:23.947295    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:23.947303    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:23.947309    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:23.947326    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:23.947338    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:23.947346    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:23.947352    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:23.947363    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:23.947373    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:23.947383    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:23.947391    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:23.947398    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:23.947407    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:23.947413    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:23.947421    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:25.948600    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 11
	I0728 19:03:25.948615    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:25.948737    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:25.949487    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:25.949536    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:25.949549    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:25.949561    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:25.949587    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:25.949599    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:25.949606    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:25.949613    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:25.949621    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:25.949629    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:25.949650    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:25.949667    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:25.949679    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:25.949689    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:25.949698    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:25.949711    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:25.949719    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:25.949727    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:25.949735    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:27.950214    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 12
	I0728 19:03:27.950229    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:27.950313    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:27.951093    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:27.951134    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:27.951142    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:27.951161    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:27.951170    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:27.951186    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:27.951197    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:27.951208    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:27.951215    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:27.951222    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:27.951230    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:27.951241    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:27.951253    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:27.951260    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:27.951267    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:27.951280    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:27.951293    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:27.951309    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:27.951318    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:29.952473    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 13
	I0728 19:03:29.952488    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:29.952608    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:29.953410    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:29.953455    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:29.953466    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:29.953476    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:29.953484    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:29.953492    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:29.953498    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:29.953505    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:29.953512    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:29.953531    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:29.953538    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:29.953546    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:29.953553    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:29.953569    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:29.953579    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:29.953587    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:29.953599    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:29.953607    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:29.953623    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:31.953941    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 14
	I0728 19:03:31.953956    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:31.954010    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:31.954796    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:31.954824    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:31.954835    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:31.954852    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:31.954859    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:31.954874    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:31.954885    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:31.954892    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:31.954899    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:31.954914    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:31.954925    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:31.954932    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:31.954941    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:31.954953    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:31.954960    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:31.954967    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:31.954975    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:31.954988    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:31.954996    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:33.955065    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 15
	I0728 19:03:33.955080    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:33.955175    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:33.955937    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:33.955988    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:33.956001    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:33.956017    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:33.956026    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:33.956033    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:33.956041    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:33.956048    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:33.956054    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:33.956062    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:33.956070    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:33.956078    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:33.956085    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:33.956093    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:33.956102    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:33.956110    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:33.956117    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:33.956126    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:33.956140    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:35.957130    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 16
	I0728 19:03:35.957150    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:35.957283    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:35.958091    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:35.958126    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:35.958148    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:35.958158    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:35.958166    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:35.958175    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:35.958198    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:35.958209    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:35.958216    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:35.958224    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:35.958238    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:35.958252    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:35.958260    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:35.958269    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:35.958275    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:35.958282    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:35.958295    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:35.958308    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:35.958325    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:37.959217    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 17
	I0728 19:03:37.959232    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:37.959329    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:37.960056    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:37.960104    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:37.960115    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:37.960130    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:37.960141    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:37.960148    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:37.960155    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:37.960170    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:37.960185    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:37.960193    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:37.960201    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:37.960212    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:37.960220    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:37.960230    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:37.960239    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:37.960246    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:37.960254    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:37.960263    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:37.960276    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:39.960707    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 18
	I0728 19:03:39.960724    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:39.960794    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:39.961558    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:39.961608    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:39.961618    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:39.961630    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:39.961639    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:39.961646    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:39.961653    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:39.961663    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:39.961673    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:39.961681    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:39.961690    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:39.961697    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:39.961703    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:39.961709    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:39.961717    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:39.961722    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:39.961730    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:39.961739    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:39.961747    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:41.961758    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 19
	I0728 19:03:41.961774    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:41.961904    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:41.962644    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:41.962700    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:41.962717    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:41.962736    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:41.962743    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:41.962758    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:41.962779    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:41.962788    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:41.962795    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:41.962803    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:41.962815    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:41.962825    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:41.962834    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:41.962842    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:41.962870    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:41.962883    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:41.962894    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:41.962902    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:41.962911    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:43.963041    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 20
	I0728 19:03:43.963053    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:43.963181    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:43.963976    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:43.964035    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:43.964049    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:43.964056    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:43.964063    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:43.964071    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:43.964081    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:43.964089    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:43.964108    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:43.964121    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:43.964132    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:43.964141    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:43.964148    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:43.964157    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:43.964164    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:43.964172    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:43.964181    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:43.964189    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:43.964198    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:45.964698    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 21
	I0728 19:03:45.964711    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:45.964762    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:45.965525    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:45.965581    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:45.965590    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:45.965621    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:45.965634    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:45.965643    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:45.965652    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:45.965672    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:45.965689    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:45.965702    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:45.965710    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:45.965719    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:45.965730    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:45.965740    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:45.965749    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:45.965757    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:45.965776    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:45.965784    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:45.965793    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:47.967710    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 22
	I0728 19:03:47.967723    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:47.967769    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:47.968597    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:47.968635    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:47.968648    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:47.968659    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:47.968665    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:47.968672    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:47.968692    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:47.968698    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:47.968705    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:47.968713    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:47.968736    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:47.968749    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:47.968756    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:47.968764    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:47.968771    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:47.968777    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:47.968783    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:47.968790    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:47.968798    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:49.970818    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 23
	I0728 19:03:49.970835    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:49.970863    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:49.971638    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:49.971691    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:49.971701    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:49.971709    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:49.971722    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:49.971740    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:49.971752    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:49.971777    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:49.971789    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:49.971796    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:49.971814    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:49.971821    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:49.971827    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:49.971836    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:49.971844    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:49.971852    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:49.971860    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:49.971867    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:49.971875    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:51.972762    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 24
	I0728 19:03:51.972778    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:51.972828    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:51.973920    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:51.973963    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:51.973975    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:51.973985    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:51.973994    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:51.974018    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:51.974033    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:51.974041    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:51.974050    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:51.974063    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:51.974071    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:51.974078    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:51.974086    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:51.974093    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:51.974101    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:51.974110    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:51.974116    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:51.974130    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:51.974142    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:53.974792    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 25
	I0728 19:03:53.974804    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:53.974878    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:53.975788    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:53.975826    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:53.975840    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:53.975848    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:53.975858    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:53.975871    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:53.975880    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:53.975889    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:53.975896    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:53.975904    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:53.975911    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:53.975919    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:53.975935    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:53.975953    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:53.975969    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:53.975981    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:53.975989    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:53.975997    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:53.976021    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:55.977214    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 26
	I0728 19:03:55.977230    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:55.977284    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:55.978047    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:55.978094    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:55.978105    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:55.978116    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:55.978123    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:55.978130    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:55.978137    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:55.978147    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:55.978154    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:55.978162    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:55.978171    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:55.978178    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:55.978188    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:55.978196    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:55.978203    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:55.978209    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:55.978216    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:55.978222    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:55.978230    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:57.979197    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 27
	I0728 19:03:57.979209    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:57.979271    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:57.980104    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:57.980113    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:57.980121    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:57.980128    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:57.980135    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:57.980141    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:57.980149    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:57.980154    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:57.980161    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:57.980173    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:57.980184    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:57.980190    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:57.980197    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:57.980206    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:57.980212    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:57.980220    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:57.980230    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:57.980239    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:57.980247    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:03:59.982331    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 28
	I0728 19:03:59.982346    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:03:59.982427    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:03:59.983190    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:03:59.983245    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:03:59.983256    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:03:59.983271    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:03:59.983281    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:03:59.983303    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:03:59.983314    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:03:59.983329    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:03:59.983342    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:03:59.983350    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:03:59.983359    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:03:59.983366    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:03:59.983372    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:03:59.983378    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:03:59.983385    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:03:59.983399    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:03:59.983409    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:03:59.983417    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:03:59.983439    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:01.985434    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Attempt 29
	I0728 19:04:01.985453    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:04:01.985573    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | hyperkit pid from json: 5723
	I0728 19:04:01.986357    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Searching for 5a:e0:89:ca:77:b7 in /var/db/dhcpd_leases ...
	I0728 19:04:01.986388    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0728 19:04:01.986412    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:c3:6d:9a:fd:31 ID:1,1e:c3:6d:9a:fd:31 Lease:0x66a848b6}
	I0728 19:04:01.986423    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a847d2}
	I0728 19:04:01.986430    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a6:b9:f4:c1:85:f0 ID:1,a6:b9:f4:c1:85:f0 Lease:0x66a6f5b2}
	I0728 19:04:01.986437    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a8462d}
	I0728 19:04:01.986450    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a846e8}
	I0728 19:04:01.986462    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a846ac}
	I0728 19:04:01.986474    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 19:04:01.986483    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 19:04:01.986490    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 19:04:01.986498    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 19:04:01.986507    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 19:04:01.986515    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 19:04:01.986525    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 19:04:01.986534    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 19:04:01.986541    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 19:04:01.986546    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 19:04:01.986553    5677 main.go:141] libmachine: (force-systemd-env-720000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 19:04:03.988666    5677 client.go:171] duration metric: took 1m0.920133611s to LocalClient.Create
	I0728 19:04:05.988848    5677 start.go:128] duration metric: took 1m2.951185558s to createHost
	I0728 19:04:05.988861    5677 start.go:83] releasing machines lock for "force-systemd-env-720000", held for 1m2.951288429s
	W0728 19:04:05.988954    5677 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-720000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5a:e0:89:ca:77:b7
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-720000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5a:e0:89:ca:77:b7
	I0728 19:04:06.031337    5677 out.go:177] 
	W0728 19:04:06.052208    5677 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5a:e0:89:ca:77:b7
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5a:e0:89:ca:77:b7
	W0728 19:04:06.052223    5677 out.go:239] * 
	* 
	W0728 19:04:06.052896    5677 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 19:04:06.115198    5677 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-720000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-720000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-720000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (179.748697ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-720000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-720000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-28 19:04:06.403468 -0700 PDT m=+4688.136659572
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-720000 -n force-systemd-env-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-720000 -n force-systemd-env-720000: exit status 7 (89.225821ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 19:04:06.490515    5740 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:04:06.490537    5740 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-720000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-720000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-720000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-720000: (5.275747024s)
--- FAIL: TestForceSystemdEnv (232.66s)

                                                
                                    
TestFunctional/serial/SoftStart (194.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-596000 --alsologtostderr -v=8
E0728 17:58:33.865170    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-596000 --alsologtostderr -v=8: exit status 90 (1m13.436949123s)

                                                
                                                
-- stdout --
	* [functional-596000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "functional-596000" primary control-plane node in "functional-596000" cluster
	* Updating the running hyperkit "functional-596000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 17:58:03.181908    2067 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:58:03.182088    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182094    2067 out.go:304] Setting ErrFile to fd 2...
	I0728 17:58:03.182098    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182279    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:58:03.183681    2067 out.go:298] Setting JSON to false
	I0728 17:58:03.206318    2067 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1654,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:58:03.206416    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:58:03.227676    2067 out.go:177] * [functional-596000] minikube v1.33.1 on Darwin 14.5
	I0728 17:58:03.269722    2067 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:58:03.269783    2067 notify.go:220] Checking for updates...
	I0728 17:58:03.312443    2067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:58:03.333527    2067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:58:03.354627    2067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:58:03.375824    2067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 17:58:03.396566    2067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:58:03.417974    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:03.418146    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:58:03.418798    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.418872    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.428211    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50175
	I0728 17:58:03.428568    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.428964    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.428979    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.429182    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.429300    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.457784    2067 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 17:58:03.499269    2067 start.go:297] selected driver: hyperkit
	I0728 17:58:03.499285    2067 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.499388    2067 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:58:03.499488    2067 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.499604    2067 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:58:03.508339    2067 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:58:03.512503    2067 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.512529    2067 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:58:03.515340    2067 cni.go:84] Creating CNI manager for ""
	I0728 17:58:03.515390    2067 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:58:03.515469    2067 start.go:340] cluster config:
	{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.515565    2067 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.573559    2067 out.go:177] * Starting "functional-596000" primary control-plane node in "functional-596000" cluster
	I0728 17:58:03.610472    2067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:58:03.610521    2067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:58:03.610545    2067 cache.go:56] Caching tarball of preloaded images
	I0728 17:58:03.610741    2067 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 17:58:03.610759    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:58:03.610882    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/config.json ...
	I0728 17:58:03.611579    2067 start.go:360] acquireMachinesLock for functional-596000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 17:58:03.611656    2067 start.go:364] duration metric: took 61.959µs to acquireMachinesLock for "functional-596000"
	I0728 17:58:03.611681    2067 start.go:96] Skipping create...Using existing machine configuration
	I0728 17:58:03.611696    2067 fix.go:54] fixHost starting: 
	I0728 17:58:03.612004    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.612033    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.621321    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50177
	I0728 17:58:03.621639    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.622002    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.622022    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.622230    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.622342    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.622436    2067 main.go:141] libmachine: (functional-596000) Calling .GetState
	I0728 17:58:03.622567    2067 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 17:58:03.622651    2067 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
	I0728 17:58:03.623593    2067 fix.go:112] recreateIfNeeded on functional-596000: state=Running err=<nil>
	W0728 17:58:03.623608    2067 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 17:58:03.644584    2067 out.go:177] * Updating the running hyperkit "functional-596000" VM ...
	I0728 17:58:03.686410    2067 machine.go:94] provisionDockerMachine start ...
	I0728 17:58:03.686443    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.686748    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.686992    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.687220    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687442    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687672    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.687922    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.688298    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.688318    2067 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 17:58:03.737887    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.737901    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738050    2067 buildroot.go:166] provisioning hostname "functional-596000"
	I0728 17:58:03.738062    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738158    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.738247    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.738335    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738433    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738522    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.738660    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.738789    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.738804    2067 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-596000 && echo "functional-596000" | sudo tee /etc/hostname
	I0728 17:58:03.799001    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.799026    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.799176    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.799262    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799342    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799457    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.799594    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.799743    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.799755    2067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 17:58:03.848940    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:03.848963    2067 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 17:58:03.848979    2067 buildroot.go:174] setting up certificates
	I0728 17:58:03.848994    2067 provision.go:84] configureAuth start
	I0728 17:58:03.849001    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.849120    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:03.849210    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.849295    2067 provision.go:143] copyHostCerts
	I0728 17:58:03.849323    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849389    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 17:58:03.849397    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849587    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 17:58:03.849823    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.849865    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 17:58:03.849873    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.850017    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 17:58:03.850186    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850225    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 17:58:03.850230    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850308    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 17:58:03.850449    2067 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.functional-596000 san=[127.0.0.1 192.169.0.4 functional-596000 localhost minikube]
	I0728 17:58:03.967853    2067 provision.go:177] copyRemoteCerts
	I0728 17:58:03.967921    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 17:58:03.967939    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.968094    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.968192    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.968299    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.968393    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.001708    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 17:58:04.001790    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 17:58:04.022827    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 17:58:04.022891    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0728 17:58:04.042748    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 17:58:04.042810    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 17:58:04.062503    2067 provision.go:87] duration metric: took 213.493856ms to configureAuth
	I0728 17:58:04.062518    2067 buildroot.go:189] setting minikube options for container-runtime
	I0728 17:58:04.062657    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:04.062674    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.062814    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.062907    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.062999    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063076    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063159    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.063261    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.063390    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.063398    2067 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 17:58:04.115857    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 17:58:04.115869    2067 buildroot.go:70] root file system type: tmpfs
	I0728 17:58:04.115942    2067 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 17:58:04.115956    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.116086    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.116177    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116266    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116359    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.116490    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.116628    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.116676    2067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 17:58:04.180807    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 17:58:04.180831    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.180961    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.181052    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181141    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181233    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.181369    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.181514    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.181526    2067 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 17:58:04.236936    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:04.236950    2067 machine.go:97] duration metric: took 550.516869ms to provisionDockerMachine
	I0728 17:58:04.236962    2067 start.go:293] postStartSetup for "functional-596000" (driver="hyperkit")
	I0728 17:58:04.236969    2067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 17:58:04.236980    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.237151    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 17:58:04.237167    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.237259    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.237356    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.237450    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.237524    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.269248    2067 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 17:58:04.272370    2067 command_runner.go:130] > NAME=Buildroot
	I0728 17:58:04.272378    2067 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 17:58:04.272381    2067 command_runner.go:130] > ID=buildroot
	I0728 17:58:04.272385    2067 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 17:58:04.272389    2067 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 17:58:04.272475    2067 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 17:58:04.272491    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 17:58:04.272591    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 17:58:04.272782    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 17:58:04.272789    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 17:58:04.272981    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> hosts in /etc/test/nested/copy/1533
	I0728 17:58:04.272987    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> /etc/test/nested/copy/1533/hosts
	I0728 17:58:04.273049    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1533
	I0728 17:58:04.281301    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 17:58:04.301144    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts --> /etc/test/nested/copy/1533/hosts (40 bytes)
	I0728 17:58:04.321194    2067 start.go:296] duration metric: took 84.223294ms for postStartSetup
	I0728 17:58:04.321219    2067 fix.go:56] duration metric: took 709.52621ms for fixHost
	I0728 17:58:04.321235    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.321378    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.321458    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321552    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321634    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.321767    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.321915    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.321922    2067 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 17:58:04.372672    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214684.480661733
	
	I0728 17:58:04.372686    2067 fix.go:216] guest clock: 1722214684.480661733
	I0728 17:58:04.372691    2067 fix.go:229] Guest: 2024-07-28 17:58:04.480661733 -0700 PDT Remote: 2024-07-28 17:58:04.321226 -0700 PDT m=+1.173910037 (delta=159.435733ms)
	I0728 17:58:04.372708    2067 fix.go:200] guest clock delta is within tolerance: 159.435733ms
	I0728 17:58:04.372712    2067 start.go:83] releasing machines lock for "functional-596000", held for 761.044153ms
	I0728 17:58:04.372731    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.372854    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:04.372965    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373253    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373372    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373450    2067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 17:58:04.373485    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373513    2067 ssh_runner.go:195] Run: cat /version.json
	I0728 17:58:04.373523    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373581    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373615    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373688    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373706    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373784    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373796    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373868    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.373891    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.444486    2067 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 17:58:04.445070    2067 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 17:58:04.445228    2067 ssh_runner.go:195] Run: systemctl --version
	I0728 17:58:04.449759    2067 command_runner.go:130] > systemd 252 (252)
	I0728 17:58:04.449776    2067 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 17:58:04.450022    2067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 17:58:04.454258    2067 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 17:58:04.454279    2067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 17:58:04.454319    2067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 17:58:04.462388    2067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0728 17:58:04.462398    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.462514    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.477917    2067 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 17:58:04.478151    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 17:58:04.487863    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 17:58:04.497357    2067 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 17:58:04.497404    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 17:58:04.507132    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.516475    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 17:58:04.526165    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.535504    2067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 17:58:04.545511    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 17:58:04.554731    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 17:58:04.563973    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 17:58:04.573675    2067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 17:58:04.582020    2067 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 17:58:04.582227    2067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 17:58:04.591135    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:04.729887    2067 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 17:58:04.749030    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.749107    2067 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 17:58:04.763070    2067 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 17:58:04.763645    2067 command_runner.go:130] > [Unit]
	I0728 17:58:04.763655    2067 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 17:58:04.763659    2067 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 17:58:04.763664    2067 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 17:58:04.763668    2067 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 17:58:04.763673    2067 command_runner.go:130] > StartLimitBurst=3
	I0728 17:58:04.763676    2067 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 17:58:04.763680    2067 command_runner.go:130] > [Service]
	I0728 17:58:04.763686    2067 command_runner.go:130] > Type=notify
	I0728 17:58:04.763691    2067 command_runner.go:130] > Restart=on-failure
	I0728 17:58:04.763696    2067 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 17:58:04.763711    2067 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 17:58:04.763718    2067 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 17:58:04.763723    2067 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 17:58:04.763729    2067 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 17:58:04.763734    2067 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 17:58:04.763741    2067 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 17:58:04.763754    2067 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 17:58:04.763760    2067 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 17:58:04.763763    2067 command_runner.go:130] > ExecStart=
	I0728 17:58:04.763777    2067 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 17:58:04.763782    2067 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 17:58:04.763788    2067 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 17:58:04.763795    2067 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 17:58:04.763798    2067 command_runner.go:130] > LimitNOFILE=infinity
	I0728 17:58:04.763802    2067 command_runner.go:130] > LimitNPROC=infinity
	I0728 17:58:04.763807    2067 command_runner.go:130] > LimitCORE=infinity
	I0728 17:58:04.763811    2067 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 17:58:04.763815    2067 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 17:58:04.763824    2067 command_runner.go:130] > TasksMax=infinity
	I0728 17:58:04.763828    2067 command_runner.go:130] > TimeoutStartSec=0
	I0728 17:58:04.763833    2067 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 17:58:04.763837    2067 command_runner.go:130] > Delegate=yes
	I0728 17:58:04.763842    2067 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 17:58:04.763846    2067 command_runner.go:130] > KillMode=process
	I0728 17:58:04.763849    2067 command_runner.go:130] > [Install]
	I0728 17:58:04.763857    2067 command_runner.go:130] > WantedBy=multi-user.target
	I0728 17:58:04.763963    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.775171    2067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 17:58:04.803670    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.815918    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 17:58:04.827728    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.842925    2067 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 17:58:04.843170    2067 ssh_runner.go:195] Run: which cri-dockerd
	I0728 17:58:04.846059    2067 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 17:58:04.846245    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 17:58:04.854364    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 17:58:04.868292    2067 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 17:58:05.006256    2067 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 17:58:05.135902    2067 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 17:58:05.135971    2067 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 17:58:05.150351    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:05.274841    2067 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 17:59:16.388765    2067 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 17:59:16.388780    2067 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 17:59:16.388791    2067 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.113588859s)
	I0728 17:59:16.388851    2067 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 17:59:16.398150    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.398166    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	I0728 17:59:16.398196    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.398214    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0728 17:59:16.398223    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.398235    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.398246    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.398255    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.398264    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398274    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398283    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398302    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398312    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398323    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398332    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398342    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398350    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398364    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398374    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398432    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398446    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.398456    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.398464    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.398472    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.398481    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.398490    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.398500    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.398509    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.398518    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.398527    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398536    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398544    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.398554    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.398566    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398576    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398585    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398594    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398604    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398614    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398624    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398900    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398913    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398921    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398929    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398938    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398946    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398955    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398963    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398971    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398980    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398994    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399003    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399011    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399019    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399028    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.399037    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399045    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399054    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399063    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399075    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399084    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399288    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.399301    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399321    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.399330    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.399339    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.399347    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.399355    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.399363    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	I0728 17:59:16.399371    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.399378    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	I0728 17:59:16.399399    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.399408    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	I0728 17:59:16.399429    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.399457    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.399464    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.399492    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.399501    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.399507    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.399513    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.399522    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.399528    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.399545    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.399553    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.399559    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.399564    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.399574    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.399581    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	I0728 17:59:16.399696    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.399709    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	I0728 17:59:16.399718    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.399726    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.399735    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399744    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.399753    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399767    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399777    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399792    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399801    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399812    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399821    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399831    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399840    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399853    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399863    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399876    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399910    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.399925    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.399934    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.399943    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.399952    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.399961    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.399969    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.399980    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.399991    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.400000    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400009    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400021    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.400031    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.400040    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400050    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400059    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400068    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400077    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400086    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400096    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400105    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400173    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400185    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400194    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400203    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400216    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400225    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400234    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400243    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400251    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400260    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400269    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400278    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400286    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400295    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.400304    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400312    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400321    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400332    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400344    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400354    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400365    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.400375    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400543    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.400553    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.400561    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.400572    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.400580    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.400588    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	I0728 17:59:16.400596    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.400604    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	I0728 17:59:16.400623    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.400634    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.400642    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	I0728 17:59:16.400652    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.400659    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.400669    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.400676    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.400683    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.400691    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.400701    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.400716    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.400730    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.400739    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.400746    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.400751    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.400757    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.400763    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.400770    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	I0728 17:59:16.400830    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.400840    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	I0728 17:59:16.400849    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.400859    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.400881    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400890    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.400899    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400909    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400918    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400934    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400942    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400956    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400968    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400977    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400989    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401003    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401012    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401028    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401037    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.401046    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.401054    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.401063    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.401072    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.401081    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.401092    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.401101    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.401110    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.401119    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401131    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401140    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.401150    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.401160    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401169    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401178    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401199    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401209    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401218    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401228    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401241    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401289    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401303    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401312    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401322    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401330    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401339    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401348    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401357    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401366    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401376    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401385    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401394    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401403    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401412    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.401420    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401428    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401437    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401446    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401460    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401481    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401492    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.401593    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401606    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.401620    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.401628    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.401637    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.401645    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.401653    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	I0728 17:59:16.401660    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.401668    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	I0728 17:59:16.401689    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.401703    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.401711    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	I0728 17:59:16.401721    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.401730    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.401738    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.401745    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.401751    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.401760    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401774    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401784    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401794    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401803    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401838    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401853    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401866    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401880    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401894    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401904    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401914    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401924    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401938    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401948    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401958    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401968    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401977    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401987    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401997    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402013    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402025    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402041    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402054    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402095    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402108    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402118    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402128    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402142    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402152    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402162    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402171    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402181    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402191    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402204    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402214    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402234    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402244    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402254    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402264    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402274    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402284    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402297    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402342    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402355    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402365    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402374    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402385    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402395    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402404    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402414    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402424    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402434    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402444    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402459    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402469    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402481    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402491    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402502    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402512    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402523    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402533    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402626    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402640    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402650    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402661    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402671    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402679    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402690    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402700    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402710    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402723    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402741    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0728 17:59:16.402749    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.402760    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.402770    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402780    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402788    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402799    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402813    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402822    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.402832    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.403003    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403018    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403028    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403038    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403046    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403056    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403066    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403081    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403094    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403108    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403118    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403129    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403140    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403155    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403164    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403175    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403183    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403198    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403207    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403218    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403226    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403235    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403247    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403255    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403266    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403278    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403294    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403304    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403314    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403322    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403333    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403342    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403356    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403368    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403376    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403386    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403396    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403405    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403492    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403510    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403517    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403528    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403538    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403548    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403555    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403572    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	I0728 17:59:16.403584    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403593    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403604    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403611    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403621    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.403628    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.403638    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.403648    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.403658    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.403666    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.403673    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	I0728 17:59:16.403686    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.403696    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	I0728 17:59:16.403704    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 17:59:16.403716    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 17:59:16.403721    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 17:59:16.403735    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 17:59:16.437925    2067 out.go:177] 
	W0728 17:59:16.458779    2067 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 17:59:16.459445    2067 out.go:239] * 
	* 
	W0728 17:59:16.460660    2067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:59:16.543445    2067 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-amd64 start -p functional-596000 --alsologtostderr -v=8": exit status 90
functional_test.go:663: soft start took 1m13.493556092s for "functional-596000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000: exit status 2 (145.359521ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 logs -n 25
E0728 18:00:50.005404    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 logs -n 25: (2m0.422272271s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | enable headlamp                                                                             | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:52 PDT | 28 Jul 24 17:52 PDT |
	|         | -p addons-967000                                                                            |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                   |         |         |                     |                     |
	| ssh     | addons-967000 ssh cat                                                                       | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:52 PDT | 28 Jul 24 17:52 PDT |
	|         | /opt/local-path-provisioner/pvc-763f0b3f-3a84-408e-988e-e89dc26ea2ee_default_test-pvc/file1 |                   |         |         |                     |                     |
	| addons  | addons-967000 addons disable                                                                | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:52 PDT | 28 Jul 24 17:53 PDT |
	|         | storage-provisioner-rancher                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                   |         |         |                     |                     |
	| addons  | addons-967000 addons disable                                                                | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | headlamp --alsologtostderr                                                                  |                   |         |         |                     |                     |
	|         | -v=1                                                                                        |                   |         |         |                     |                     |
	| stop    | -p addons-967000                                                                            | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	| addons  | enable dashboard -p                                                                         | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | addons-967000                                                                               |                   |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | addons-967000                                                                               |                   |         |         |                     |                     |
	| addons  | disable gvisor -p                                                                           | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | addons-967000                                                                               |                   |         |         |                     |                     |
	| delete  | -p addons-967000                                                                            | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	| start   | -p nospam-292000 -n=1 --memory=2250 --wait=false                                            | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:54 PDT |
	|         | --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                    |                   |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                   |         |         |                     |                     |
	| start   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT |                     |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | start --dry-run                                                                             |                   |         |         |                     |                     |
	| start   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT |                     |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | start --dry-run                                                                             |                   |         |         |                     |                     |
	| start   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT |                     |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | start --dry-run                                                                             |                   |         |         |                     |                     |
	| pause   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | pause                                                                                       |                   |         |         |                     |                     |
	| pause   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | pause                                                                                       |                   |         |         |                     |                     |
	| pause   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | pause                                                                                       |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | unpause                                                                                     |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | unpause                                                                                     |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | unpause                                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | stop                                                                                        |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:55 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | stop                                                                                        |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:55 PDT | 28 Jul 24 17:56 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | stop                                                                                        |                   |         |         |                     |                     |
	| delete  | -p nospam-292000                                                                            | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	| start   | -p functional-596000                                                                        | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:58 PDT |
	|         | --memory=4000                                                                               |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                                       |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                                                |                   |         |         |                     |                     |
	| start   | -p functional-596000                                                                        | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:58 PDT |                     |
	|         | --alsologtostderr -v=8                                                                      |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:58:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:58:03.181908    2067 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:58:03.182088    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182094    2067 out.go:304] Setting ErrFile to fd 2...
	I0728 17:58:03.182098    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182279    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:58:03.183681    2067 out.go:298] Setting JSON to false
	I0728 17:58:03.206318    2067 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1654,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:58:03.206416    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:58:03.227676    2067 out.go:177] * [functional-596000] minikube v1.33.1 on Darwin 14.5
	I0728 17:58:03.269722    2067 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:58:03.269783    2067 notify.go:220] Checking for updates...
	I0728 17:58:03.312443    2067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:58:03.333527    2067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:58:03.354627    2067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:58:03.375824    2067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 17:58:03.396566    2067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:58:03.417974    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:03.418146    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:58:03.418798    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.418872    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.428211    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50175
	I0728 17:58:03.428568    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.428964    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.428979    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.429182    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.429300    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.457784    2067 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 17:58:03.499269    2067 start.go:297] selected driver: hyperkit
	I0728 17:58:03.499285    2067 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.499388    2067 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:58:03.499488    2067 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.499604    2067 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:58:03.508339    2067 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:58:03.512503    2067 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.512529    2067 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:58:03.515340    2067 cni.go:84] Creating CNI manager for ""
	I0728 17:58:03.515390    2067 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:58:03.515469    2067 start.go:340] cluster config:
	{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.515565    2067 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.573559    2067 out.go:177] * Starting "functional-596000" primary control-plane node in "functional-596000" cluster
	I0728 17:58:03.610472    2067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:58:03.610521    2067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:58:03.610545    2067 cache.go:56] Caching tarball of preloaded images
	I0728 17:58:03.610741    2067 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 17:58:03.610759    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:58:03.610882    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/config.json ...
	I0728 17:58:03.611579    2067 start.go:360] acquireMachinesLock for functional-596000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 17:58:03.611656    2067 start.go:364] duration metric: took 61.959µs to acquireMachinesLock for "functional-596000"
	I0728 17:58:03.611681    2067 start.go:96] Skipping create...Using existing machine configuration
	I0728 17:58:03.611696    2067 fix.go:54] fixHost starting: 
	I0728 17:58:03.612004    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.612033    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.621321    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50177
	I0728 17:58:03.621639    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.622002    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.622022    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.622230    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.622342    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.622436    2067 main.go:141] libmachine: (functional-596000) Calling .GetState
	I0728 17:58:03.622567    2067 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 17:58:03.622651    2067 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
	I0728 17:58:03.623593    2067 fix.go:112] recreateIfNeeded on functional-596000: state=Running err=<nil>
	W0728 17:58:03.623608    2067 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 17:58:03.644584    2067 out.go:177] * Updating the running hyperkit "functional-596000" VM ...
	I0728 17:58:03.686410    2067 machine.go:94] provisionDockerMachine start ...
	I0728 17:58:03.686443    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.686748    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.686992    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.687220    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687442    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687672    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.687922    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.688298    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.688318    2067 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 17:58:03.737887    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.737901    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738050    2067 buildroot.go:166] provisioning hostname "functional-596000"
	I0728 17:58:03.738062    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738158    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.738247    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.738335    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738433    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738522    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.738660    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.738789    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.738804    2067 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-596000 && echo "functional-596000" | sudo tee /etc/hostname
	I0728 17:58:03.799001    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.799026    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.799176    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.799262    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799342    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799457    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.799594    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.799743    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.799755    2067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 17:58:03.848940    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:03.848963    2067 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 17:58:03.848979    2067 buildroot.go:174] setting up certificates
	I0728 17:58:03.848994    2067 provision.go:84] configureAuth start
	I0728 17:58:03.849001    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.849120    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:03.849210    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.849295    2067 provision.go:143] copyHostCerts
	I0728 17:58:03.849323    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849389    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 17:58:03.849397    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849587    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 17:58:03.849823    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.849865    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 17:58:03.849873    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.850017    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 17:58:03.850186    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850225    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 17:58:03.850230    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850308    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 17:58:03.850449    2067 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.functional-596000 san=[127.0.0.1 192.169.0.4 functional-596000 localhost minikube]
	I0728 17:58:03.967853    2067 provision.go:177] copyRemoteCerts
	I0728 17:58:03.967921    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 17:58:03.967939    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.968094    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.968192    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.968299    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.968393    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.001708    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 17:58:04.001790    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 17:58:04.022827    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 17:58:04.022891    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0728 17:58:04.042748    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 17:58:04.042810    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 17:58:04.062503    2067 provision.go:87] duration metric: took 213.493856ms to configureAuth
	I0728 17:58:04.062518    2067 buildroot.go:189] setting minikube options for container-runtime
	I0728 17:58:04.062657    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:04.062674    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.062814    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.062907    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.062999    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063076    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063159    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.063261    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.063390    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.063398    2067 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 17:58:04.115857    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 17:58:04.115869    2067 buildroot.go:70] root file system type: tmpfs
	I0728 17:58:04.115942    2067 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 17:58:04.115956    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.116086    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.116177    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116266    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116359    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.116490    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.116628    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.116676    2067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 17:58:04.180807    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 17:58:04.180831    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.180961    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.181052    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181141    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181233    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.181369    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.181514    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.181526    2067 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 17:58:04.236936    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:04.236950    2067 machine.go:97] duration metric: took 550.516869ms to provisionDockerMachine
	I0728 17:58:04.236962    2067 start.go:293] postStartSetup for "functional-596000" (driver="hyperkit")
	I0728 17:58:04.236969    2067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 17:58:04.236980    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.237151    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 17:58:04.237167    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.237259    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.237356    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.237450    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.237524    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.269248    2067 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 17:58:04.272370    2067 command_runner.go:130] > NAME=Buildroot
	I0728 17:58:04.272378    2067 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 17:58:04.272381    2067 command_runner.go:130] > ID=buildroot
	I0728 17:58:04.272385    2067 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 17:58:04.272389    2067 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 17:58:04.272475    2067 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 17:58:04.272491    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 17:58:04.272591    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 17:58:04.272782    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 17:58:04.272789    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 17:58:04.272981    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> hosts in /etc/test/nested/copy/1533
	I0728 17:58:04.272987    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> /etc/test/nested/copy/1533/hosts
	I0728 17:58:04.273049    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1533
	I0728 17:58:04.281301    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 17:58:04.301144    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts --> /etc/test/nested/copy/1533/hosts (40 bytes)
	I0728 17:58:04.321194    2067 start.go:296] duration metric: took 84.223294ms for postStartSetup
	I0728 17:58:04.321219    2067 fix.go:56] duration metric: took 709.52621ms for fixHost
	I0728 17:58:04.321235    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.321378    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.321458    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321552    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321634    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.321767    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.321915    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.321922    2067 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 17:58:04.372672    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214684.480661733
	
	I0728 17:58:04.372686    2067 fix.go:216] guest clock: 1722214684.480661733
	I0728 17:58:04.372691    2067 fix.go:229] Guest: 2024-07-28 17:58:04.480661733 -0700 PDT Remote: 2024-07-28 17:58:04.321226 -0700 PDT m=+1.173910037 (delta=159.435733ms)
	I0728 17:58:04.372708    2067 fix.go:200] guest clock delta is within tolerance: 159.435733ms
	I0728 17:58:04.372712    2067 start.go:83] releasing machines lock for "functional-596000", held for 761.044153ms
	I0728 17:58:04.372731    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.372854    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:04.372965    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373253    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373372    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373450    2067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 17:58:04.373485    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373513    2067 ssh_runner.go:195] Run: cat /version.json
	I0728 17:58:04.373523    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373581    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373615    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373688    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373706    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373784    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373796    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373868    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.373891    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.444486    2067 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 17:58:04.445070    2067 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 17:58:04.445228    2067 ssh_runner.go:195] Run: systemctl --version
	I0728 17:58:04.449759    2067 command_runner.go:130] > systemd 252 (252)
	I0728 17:58:04.449776    2067 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 17:58:04.450022    2067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 17:58:04.454258    2067 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 17:58:04.454279    2067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 17:58:04.454319    2067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 17:58:04.462388    2067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0728 17:58:04.462398    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.462514    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.477917    2067 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 17:58:04.478151    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 17:58:04.487863    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 17:58:04.497357    2067 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 17:58:04.497404    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 17:58:04.507132    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.516475    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 17:58:04.526165    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.535504    2067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 17:58:04.545511    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 17:58:04.554731    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 17:58:04.563973    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 17:58:04.573675    2067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 17:58:04.582020    2067 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 17:58:04.582227    2067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 17:58:04.591135    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:04.729887    2067 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 17:58:04.749030    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.749107    2067 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 17:58:04.763070    2067 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 17:58:04.763645    2067 command_runner.go:130] > [Unit]
	I0728 17:58:04.763655    2067 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 17:58:04.763659    2067 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 17:58:04.763664    2067 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 17:58:04.763668    2067 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 17:58:04.763673    2067 command_runner.go:130] > StartLimitBurst=3
	I0728 17:58:04.763676    2067 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 17:58:04.763680    2067 command_runner.go:130] > [Service]
	I0728 17:58:04.763686    2067 command_runner.go:130] > Type=notify
	I0728 17:58:04.763691    2067 command_runner.go:130] > Restart=on-failure
	I0728 17:58:04.763696    2067 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 17:58:04.763711    2067 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 17:58:04.763718    2067 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 17:58:04.763723    2067 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 17:58:04.763729    2067 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 17:58:04.763734    2067 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 17:58:04.763741    2067 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 17:58:04.763754    2067 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 17:58:04.763760    2067 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 17:58:04.763763    2067 command_runner.go:130] > ExecStart=
	I0728 17:58:04.763777    2067 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 17:58:04.763782    2067 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 17:58:04.763788    2067 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 17:58:04.763795    2067 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 17:58:04.763798    2067 command_runner.go:130] > LimitNOFILE=infinity
	I0728 17:58:04.763802    2067 command_runner.go:130] > LimitNPROC=infinity
	I0728 17:58:04.763807    2067 command_runner.go:130] > LimitCORE=infinity
	I0728 17:58:04.763811    2067 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 17:58:04.763815    2067 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 17:58:04.763824    2067 command_runner.go:130] > TasksMax=infinity
	I0728 17:58:04.763828    2067 command_runner.go:130] > TimeoutStartSec=0
	I0728 17:58:04.763833    2067 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 17:58:04.763837    2067 command_runner.go:130] > Delegate=yes
	I0728 17:58:04.763842    2067 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 17:58:04.763846    2067 command_runner.go:130] > KillMode=process
	I0728 17:58:04.763849    2067 command_runner.go:130] > [Install]
	I0728 17:58:04.763857    2067 command_runner.go:130] > WantedBy=multi-user.target
	I0728 17:58:04.763963    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.775171    2067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 17:58:04.803670    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.815918    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 17:58:04.827728    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.842925    2067 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 17:58:04.843170    2067 ssh_runner.go:195] Run: which cri-dockerd
	I0728 17:58:04.846059    2067 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 17:58:04.846245    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 17:58:04.854364    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 17:58:04.868292    2067 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 17:58:05.006256    2067 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 17:58:05.135902    2067 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 17:58:05.135971    2067 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 17:58:05.150351    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:05.274841    2067 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 17:59:16.388765    2067 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 17:59:16.388780    2067 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 17:59:16.388791    2067 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.113588859s)
	I0728 17:59:16.388851    2067 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 17:59:16.398150    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.398166    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	I0728 17:59:16.398196    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.398214    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0728 17:59:16.398223    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.398235    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.398246    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.398255    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.398264    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398274    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398283    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398302    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398312    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398323    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398332    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398342    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398350    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398364    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398374    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398432    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398446    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.398456    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.398464    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.398472    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.398481    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.398490    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.398500    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.398509    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.398518    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.398527    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398536    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398544    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.398554    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.398566    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398576    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398585    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398594    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398604    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398614    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398624    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398900    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398913    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398921    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398929    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398938    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398946    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398955    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398963    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398971    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398980    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398994    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399003    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399011    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399019    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399028    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.399037    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399045    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399054    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399063    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399075    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399084    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399288    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.399301    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399321    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.399330    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.399339    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.399347    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.399355    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.399363    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	I0728 17:59:16.399371    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.399378    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	I0728 17:59:16.399399    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.399408    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	I0728 17:59:16.399429    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.399457    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.399464    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.399492    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.399501    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.399507    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.399513    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.399522    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.399528    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.399545    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.399553    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.399559    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.399564    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.399574    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.399581    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	I0728 17:59:16.399696    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.399709    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	I0728 17:59:16.399718    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.399726    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.399735    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399744    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.399753    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399767    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399777    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399792    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399801    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399812    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399821    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399831    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399840    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399853    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399863    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399876    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399910    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.399925    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.399934    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.399943    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.399952    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.399961    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.399969    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.399980    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.399991    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.400000    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400009    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400021    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.400031    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.400040    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400050    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400059    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400068    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400077    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400086    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400096    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400105    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400173    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400185    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400194    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400203    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400216    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400225    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400234    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400243    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400251    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400260    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400269    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400278    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400286    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400295    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.400304    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400312    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400321    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400332    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400344    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400354    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400365    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.400375    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400543    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.400553    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.400561    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.400572    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.400580    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.400588    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	I0728 17:59:16.400596    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.400604    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	I0728 17:59:16.400623    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.400634    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.400642    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	I0728 17:59:16.400652    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.400659    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.400669    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.400676    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.400683    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.400691    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.400701    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.400716    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.400730    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.400739    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.400746    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.400751    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.400757    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.400763    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.400770    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	I0728 17:59:16.400830    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.400840    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	I0728 17:59:16.400849    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.400859    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.400881    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400890    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.400899    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400909    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400918    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400934    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400942    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400956    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400968    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400977    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400989    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401003    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401012    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401028    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401037    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.401046    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.401054    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.401063    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.401072    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.401081    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.401092    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.401101    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.401110    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.401119    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401131    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401140    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.401150    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.401160    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401169    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401178    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401199    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401209    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401218    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401228    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401241    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401289    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401303    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401312    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401322    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401330    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401339    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401348    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401357    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401366    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401376    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401385    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401394    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401403    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401412    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.401420    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401428    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401437    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401446    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401460    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401481    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401492    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.401593    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401606    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.401620    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.401628    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.401637    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.401645    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.401653    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	I0728 17:59:16.401660    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.401668    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	I0728 17:59:16.401689    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.401703    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.401711    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	I0728 17:59:16.401721    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.401730    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.401738    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.401745    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.401751    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.401760    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401774    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401784    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401794    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401803    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401838    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401853    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401866    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401880    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401894    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401904    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401914    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401924    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401938    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401948    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401958    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401968    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401977    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401987    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401997    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402013    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402025    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402041    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402054    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402095    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402108    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402118    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402128    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402142    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402152    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402162    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402171    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402181    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402191    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402204    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402214    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402234    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402244    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402254    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402264    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402274    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402284    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402297    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402342    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402355    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402365    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402374    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402385    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402395    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402404    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402414    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402424    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402434    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402444    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402459    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402469    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402481    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402491    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402502    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402512    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402523    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402533    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402626    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402640    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402650    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402661    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402671    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402679    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402690    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402700    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402710    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402723    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402741    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0728 17:59:16.402749    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.402760    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.402770    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402780    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402788    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402799    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402813    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402822    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.402832    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.403003    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403018    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403028    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403038    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403046    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403056    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403066    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403081    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403094    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403108    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403118    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403129    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403140    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403155    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403164    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403175    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403183    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403198    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403207    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403218    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403226    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403235    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403247    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403255    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403266    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403278    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403294    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403304    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403314    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403322    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403333    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403342    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403356    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403368    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403376    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403386    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403396    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403405    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403492    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403510    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403517    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403528    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403538    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403548    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403555    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403572    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	I0728 17:59:16.403584    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403593    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403604    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403611    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403621    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.403628    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.403638    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.403648    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.403658    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.403666    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.403673    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	I0728 17:59:16.403686    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.403696    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	I0728 17:59:16.403704    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 17:59:16.403716    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 17:59:16.403721    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 17:59:16.403735    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 17:59:16.437925    2067 out.go:177] 
	W0728 17:59:16.458779    2067 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 17:59:16.459445    2067 out.go:239] * 
	W0728 17:59:16.460660    2067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:59:16.543445    2067 out.go:177] 
	
	
	==> Docker <==
	Jul 29 00:59:16 functional-596000 dockerd[3854]: time="2024-07-29T00:59:16.678050055Z" level=info msg="Starting up"
	Jul 29 01:00:16 functional-596000 dockerd[3854]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:00:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5'"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 29 01:00:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="error getting RW layer size for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:00:16 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:00:16Z" level=error msg="Set backoffDuration to : 1m0s for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad'"
	Jul 29 01:00:16 functional-596000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 29 01:00:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:00:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-29T01:00:18Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.106896] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +1.900061] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.307280] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.095207] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.062017] kauditd_printk_skb: 117 callbacks suppressed
	[  +0.071501] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +2.464238] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.103266] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.116452] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.130252] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.974695] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.052634] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.632602] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +4.717931] systemd-fstab-generator[1694]: Ignoring "noauto" option for root device
	[  +0.052232] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.965900] systemd-fstab-generator[2101]: Ignoring "noauto" option for root device
	[  +0.068473] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.556217] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.144175] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 00:58] kauditd_printk_skb: 98 callbacks suppressed
	[  +3.703331] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[  +0.280018] systemd-fstab-generator[3216]: Ignoring "noauto" option for root device
	[  +0.136220] systemd-fstab-generator[3228]: Ignoring "noauto" option for root device
	[  +0.135284] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[  +5.159757] kauditd_printk_skb: 101 callbacks suppressed
	
	
	==> kernel <==
	 01:01:17 up 4 min,  0 users,  load average: 0.03, 0.11, 0.06
	Linux functional-596000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 01:01:09 functional-596000 kubelet[2108]: E0729 01:01:09.677625    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?resourceVersion=0&timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:01:09 functional-596000 kubelet[2108]: E0729 01:01:09.678060    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:01:09 functional-596000 kubelet[2108]: E0729 01:01:09.678476    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:01:09 functional-596000 kubelet[2108]: E0729 01:01:09.678854    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:01:09 functional-596000 kubelet[2108]: E0729 01:01:09.679159    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:01:09 functional-596000 kubelet[2108]: E0729 01:01:09.679238    2108 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 01:01:12 functional-596000 kubelet[2108]: E0729 01:01:12.896626    2108 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m7.886011926s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 29 01:01:13 functional-596000 kubelet[2108]: E0729 01:01:13.921864    2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused" interval="7s"
	Jul 29 01:01:15 functional-596000 kubelet[2108]: I0729 01:01:15.543485    2108 status_manager.go:853] "Failed to get status for pod" podUID="471ce4342a500a995eaa994abbd56071" pod="kube-system/kube-apiserver-functional-596000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-596000\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.869888    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.869921    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870263    2108 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870282    2108 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870392    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870412    2108 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870442    2108 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870458    2108 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870466    2108 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870530    2108 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870573    2108 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870585    2108 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: I0729 01:01:16.870592    2108 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870854    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.870898    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:01:16 functional-596000 kubelet[2108]: E0729 01:01:16.871216    2108 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	E0728 18:00:16.512347    2089 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.524981    2089 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.535854    2089 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.546194    2089 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.558725    2089 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.569899    2089 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.582309    2089 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:00:16.594271    2089 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000: exit status 2 (151.342104ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-596000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (194.26s)

TestFunctional/serial/KubectlGetPods (120.38s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-596000 get po -A
E0728 18:01:17.706474    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-596000 get po -A: exit status 1 (542.68018ms)

** stderr ** 
	E0728 18:01:17.570471    2379 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:01:17.670733    2379 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:01:17.770789    2379 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:01:17.870875    2379 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:01:17.970969    2379 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	The connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-596000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"E0728 18:01:17.570471    2379 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0728 18:01:17.670733    2379 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0728 18:01:17.770789    2379 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0728 18:01:17.870875    2379 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0728 18:01:17.970969    2379 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: co
nnect: connection refused\nThe connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-596000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-596000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000: exit status 2 (140.374762ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 logs -n 25: (1m59.493002406s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | enable headlamp                                                                             | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:52 PDT | 28 Jul 24 17:52 PDT |
	|         | -p addons-967000                                                                            |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                   |         |         |                     |                     |
	| ssh     | addons-967000 ssh cat                                                                       | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:52 PDT | 28 Jul 24 17:52 PDT |
	|         | /opt/local-path-provisioner/pvc-763f0b3f-3a84-408e-988e-e89dc26ea2ee_default_test-pvc/file1 |                   |         |         |                     |                     |
	| addons  | addons-967000 addons disable                                                                | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:52 PDT | 28 Jul 24 17:53 PDT |
	|         | storage-provisioner-rancher                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                   |         |         |                     |                     |
	| addons  | addons-967000 addons disable                                                                | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | headlamp --alsologtostderr                                                                  |                   |         |         |                     |                     |
	|         | -v=1                                                                                        |                   |         |         |                     |                     |
	| stop    | -p addons-967000                                                                            | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	| addons  | enable dashboard -p                                                                         | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | addons-967000                                                                               |                   |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | addons-967000                                                                               |                   |         |         |                     |                     |
	| addons  | disable gvisor -p                                                                           | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	|         | addons-967000                                                                               |                   |         |         |                     |                     |
	| delete  | -p addons-967000                                                                            | addons-967000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:53 PDT |
	| start   | -p nospam-292000 -n=1 --memory=2250 --wait=false                                            | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:53 PDT | 28 Jul 24 17:54 PDT |
	|         | --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                    |                   |         |         |                     |                     |
	|         | --driver=hyperkit                                                                           |                   |         |         |                     |                     |
	| start   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT |                     |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | start --dry-run                                                                             |                   |         |         |                     |                     |
	| start   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT |                     |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | start --dry-run                                                                             |                   |         |         |                     |                     |
	| start   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT |                     |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | start --dry-run                                                                             |                   |         |         |                     |                     |
	| pause   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | pause                                                                                       |                   |         |         |                     |                     |
	| pause   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | pause                                                                                       |                   |         |         |                     |                     |
	| pause   | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | pause                                                                                       |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | unpause                                                                                     |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | unpause                                                                                     |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | unpause                                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | stop                                                                                        |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:55 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | stop                                                                                        |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                                                     | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:55 PDT | 28 Jul 24 17:56 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000                              |                   |         |         |                     |                     |
	|         | stop                                                                                        |                   |         |         |                     |                     |
	| delete  | -p nospam-292000                                                                            | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	| start   | -p functional-596000                                                                        | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:58 PDT |
	|         | --memory=4000                                                                               |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                                       |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                                                |                   |         |         |                     |                     |
	| start   | -p functional-596000                                                                        | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:58 PDT |                     |
	|         | --alsologtostderr -v=8                                                                      |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:58:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:58:03.181908    2067 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:58:03.182088    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182094    2067 out.go:304] Setting ErrFile to fd 2...
	I0728 17:58:03.182098    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182279    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:58:03.183681    2067 out.go:298] Setting JSON to false
	I0728 17:58:03.206318    2067 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1654,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:58:03.206416    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:58:03.227676    2067 out.go:177] * [functional-596000] minikube v1.33.1 on Darwin 14.5
	I0728 17:58:03.269722    2067 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:58:03.269783    2067 notify.go:220] Checking for updates...
	I0728 17:58:03.312443    2067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:58:03.333527    2067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:58:03.354627    2067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:58:03.375824    2067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 17:58:03.396566    2067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:58:03.417974    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:03.418146    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:58:03.418798    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.418872    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.428211    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50175
	I0728 17:58:03.428568    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.428964    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.428979    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.429182    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.429300    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.457784    2067 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 17:58:03.499269    2067 start.go:297] selected driver: hyperkit
	I0728 17:58:03.499285    2067 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.499388    2067 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:58:03.499488    2067 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.499604    2067 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:58:03.508339    2067 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:58:03.512503    2067 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.512529    2067 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:58:03.515340    2067 cni.go:84] Creating CNI manager for ""
	I0728 17:58:03.515390    2067 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:58:03.515469    2067 start.go:340] cluster config:
	{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.515565    2067 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.573559    2067 out.go:177] * Starting "functional-596000" primary control-plane node in "functional-596000" cluster
	I0728 17:58:03.610472    2067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:58:03.610521    2067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:58:03.610545    2067 cache.go:56] Caching tarball of preloaded images
	I0728 17:58:03.610741    2067 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 17:58:03.610759    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:58:03.610882    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/config.json ...
	I0728 17:58:03.611579    2067 start.go:360] acquireMachinesLock for functional-596000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 17:58:03.611656    2067 start.go:364] duration metric: took 61.959µs to acquireMachinesLock for "functional-596000"
	I0728 17:58:03.611681    2067 start.go:96] Skipping create...Using existing machine configuration
	I0728 17:58:03.611696    2067 fix.go:54] fixHost starting: 
	I0728 17:58:03.612004    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.612033    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.621321    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50177
	I0728 17:58:03.621639    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.622002    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.622022    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.622230    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.622342    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.622436    2067 main.go:141] libmachine: (functional-596000) Calling .GetState
	I0728 17:58:03.622567    2067 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 17:58:03.622651    2067 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
	I0728 17:58:03.623593    2067 fix.go:112] recreateIfNeeded on functional-596000: state=Running err=<nil>
	W0728 17:58:03.623608    2067 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 17:58:03.644584    2067 out.go:177] * Updating the running hyperkit "functional-596000" VM ...
	I0728 17:58:03.686410    2067 machine.go:94] provisionDockerMachine start ...
	I0728 17:58:03.686443    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.686748    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.686992    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.687220    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687442    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687672    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.687922    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.688298    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.688318    2067 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 17:58:03.737887    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.737901    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738050    2067 buildroot.go:166] provisioning hostname "functional-596000"
	I0728 17:58:03.738062    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738158    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.738247    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.738335    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738433    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738522    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.738660    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.738789    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.738804    2067 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-596000 && echo "functional-596000" | sudo tee /etc/hostname
	I0728 17:58:03.799001    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.799026    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.799176    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.799262    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799342    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799457    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.799594    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.799743    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.799755    2067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 17:58:03.848940    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:03.848963    2067 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 17:58:03.848979    2067 buildroot.go:174] setting up certificates
	I0728 17:58:03.848994    2067 provision.go:84] configureAuth start
	I0728 17:58:03.849001    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.849120    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:03.849210    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.849295    2067 provision.go:143] copyHostCerts
	I0728 17:58:03.849323    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849389    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 17:58:03.849397    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849587    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 17:58:03.849823    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.849865    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 17:58:03.849873    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.850017    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 17:58:03.850186    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850225    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 17:58:03.850230    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850308    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 17:58:03.850449    2067 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.functional-596000 san=[127.0.0.1 192.169.0.4 functional-596000 localhost minikube]
	I0728 17:58:03.967853    2067 provision.go:177] copyRemoteCerts
	I0728 17:58:03.967921    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 17:58:03.967939    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.968094    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.968192    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.968299    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.968393    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.001708    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 17:58:04.001790    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 17:58:04.022827    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 17:58:04.022891    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0728 17:58:04.042748    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 17:58:04.042810    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 17:58:04.062503    2067 provision.go:87] duration metric: took 213.493856ms to configureAuth
	I0728 17:58:04.062518    2067 buildroot.go:189] setting minikube options for container-runtime
	I0728 17:58:04.062657    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:04.062674    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.062814    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.062907    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.062999    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063076    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063159    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.063261    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.063390    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.063398    2067 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 17:58:04.115857    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 17:58:04.115869    2067 buildroot.go:70] root file system type: tmpfs
	I0728 17:58:04.115942    2067 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 17:58:04.115956    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.116086    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.116177    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116266    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116359    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.116490    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.116628    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.116676    2067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 17:58:04.180807    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 17:58:04.180831    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.180961    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.181052    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181141    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181233    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.181369    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.181514    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.181526    2067 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 17:58:04.236936    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:04.236950    2067 machine.go:97] duration metric: took 550.516869ms to provisionDockerMachine
	I0728 17:58:04.236962    2067 start.go:293] postStartSetup for "functional-596000" (driver="hyperkit")
	I0728 17:58:04.236969    2067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 17:58:04.236980    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.237151    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 17:58:04.237167    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.237259    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.237356    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.237450    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.237524    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.269248    2067 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 17:58:04.272370    2067 command_runner.go:130] > NAME=Buildroot
	I0728 17:58:04.272378    2067 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 17:58:04.272381    2067 command_runner.go:130] > ID=buildroot
	I0728 17:58:04.272385    2067 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 17:58:04.272389    2067 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 17:58:04.272475    2067 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 17:58:04.272491    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 17:58:04.272591    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 17:58:04.272782    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 17:58:04.272789    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 17:58:04.272981    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> hosts in /etc/test/nested/copy/1533
	I0728 17:58:04.272987    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> /etc/test/nested/copy/1533/hosts
	I0728 17:58:04.273049    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1533
	I0728 17:58:04.281301    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 17:58:04.301144    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts --> /etc/test/nested/copy/1533/hosts (40 bytes)
	I0728 17:58:04.321194    2067 start.go:296] duration metric: took 84.223294ms for postStartSetup
	I0728 17:58:04.321219    2067 fix.go:56] duration metric: took 709.52621ms for fixHost
	I0728 17:58:04.321235    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.321378    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.321458    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321552    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321634    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.321767    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.321915    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.321922    2067 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 17:58:04.372672    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214684.480661733
	
	I0728 17:58:04.372686    2067 fix.go:216] guest clock: 1722214684.480661733
	I0728 17:58:04.372691    2067 fix.go:229] Guest: 2024-07-28 17:58:04.480661733 -0700 PDT Remote: 2024-07-28 17:58:04.321226 -0700 PDT m=+1.173910037 (delta=159.435733ms)
	I0728 17:58:04.372708    2067 fix.go:200] guest clock delta is within tolerance: 159.435733ms
	I0728 17:58:04.372712    2067 start.go:83] releasing machines lock for "functional-596000", held for 761.044153ms
	I0728 17:58:04.372731    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.372854    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:04.372965    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373253    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373372    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373450    2067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 17:58:04.373485    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373513    2067 ssh_runner.go:195] Run: cat /version.json
	I0728 17:58:04.373523    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373581    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373615    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373688    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373706    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373784    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373796    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373868    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.373891    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.444486    2067 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 17:58:04.445070    2067 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 17:58:04.445228    2067 ssh_runner.go:195] Run: systemctl --version
	I0728 17:58:04.449759    2067 command_runner.go:130] > systemd 252 (252)
	I0728 17:58:04.449776    2067 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 17:58:04.450022    2067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 17:58:04.454258    2067 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 17:58:04.454279    2067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 17:58:04.454319    2067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 17:58:04.462388    2067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0728 17:58:04.462398    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.462514    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.477917    2067 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 17:58:04.478151    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 17:58:04.487863    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 17:58:04.497357    2067 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 17:58:04.497404    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 17:58:04.507132    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.516475    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 17:58:04.526165    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.535504    2067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 17:58:04.545511    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 17:58:04.554731    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 17:58:04.563973    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 17:58:04.573675    2067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 17:58:04.582020    2067 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 17:58:04.582227    2067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 17:58:04.591135    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:04.729887    2067 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 17:58:04.749030    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.749107    2067 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 17:58:04.763070    2067 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 17:58:04.763645    2067 command_runner.go:130] > [Unit]
	I0728 17:58:04.763655    2067 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 17:58:04.763659    2067 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 17:58:04.763664    2067 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 17:58:04.763668    2067 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 17:58:04.763673    2067 command_runner.go:130] > StartLimitBurst=3
	I0728 17:58:04.763676    2067 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 17:58:04.763680    2067 command_runner.go:130] > [Service]
	I0728 17:58:04.763686    2067 command_runner.go:130] > Type=notify
	I0728 17:58:04.763691    2067 command_runner.go:130] > Restart=on-failure
	I0728 17:58:04.763696    2067 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 17:58:04.763711    2067 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 17:58:04.763718    2067 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 17:58:04.763723    2067 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 17:58:04.763729    2067 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 17:58:04.763734    2067 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 17:58:04.763741    2067 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 17:58:04.763754    2067 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 17:58:04.763760    2067 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 17:58:04.763763    2067 command_runner.go:130] > ExecStart=
	I0728 17:58:04.763777    2067 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 17:58:04.763782    2067 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 17:58:04.763788    2067 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 17:58:04.763795    2067 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 17:58:04.763798    2067 command_runner.go:130] > LimitNOFILE=infinity
	I0728 17:58:04.763802    2067 command_runner.go:130] > LimitNPROC=infinity
	I0728 17:58:04.763807    2067 command_runner.go:130] > LimitCORE=infinity
	I0728 17:58:04.763811    2067 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 17:58:04.763815    2067 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 17:58:04.763824    2067 command_runner.go:130] > TasksMax=infinity
	I0728 17:58:04.763828    2067 command_runner.go:130] > TimeoutStartSec=0
	I0728 17:58:04.763833    2067 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 17:58:04.763837    2067 command_runner.go:130] > Delegate=yes
	I0728 17:58:04.763842    2067 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 17:58:04.763846    2067 command_runner.go:130] > KillMode=process
	I0728 17:58:04.763849    2067 command_runner.go:130] > [Install]
	I0728 17:58:04.763857    2067 command_runner.go:130] > WantedBy=multi-user.target
	I0728 17:58:04.763963    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.775171    2067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 17:58:04.803670    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.815918    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 17:58:04.827728    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.842925    2067 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 17:58:04.843170    2067 ssh_runner.go:195] Run: which cri-dockerd
	I0728 17:58:04.846059    2067 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 17:58:04.846245    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 17:58:04.854364    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 17:58:04.868292    2067 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 17:58:05.006256    2067 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 17:58:05.135902    2067 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 17:58:05.135971    2067 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 17:58:05.150351    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:05.274841    2067 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 17:59:16.388765    2067 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 17:59:16.388780    2067 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 17:59:16.388791    2067 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.113588859s)
	I0728 17:59:16.388851    2067 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 17:59:16.398150    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.398166    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	I0728 17:59:16.398196    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.398214    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0728 17:59:16.398223    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.398235    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.398246    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.398255    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.398264    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398274    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398283    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398302    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398312    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398323    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398332    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398342    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398350    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398364    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398374    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398432    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398446    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.398456    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.398464    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.398472    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.398481    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.398490    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.398500    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.398509    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.398518    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.398527    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398536    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398544    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.398554    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.398566    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398576    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398585    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398594    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398604    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398614    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398624    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398900    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398913    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398921    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398929    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398938    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398946    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398955    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398963    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398971    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398980    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398994    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399003    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399011    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399019    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399028    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.399037    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399045    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399054    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399063    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399075    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399084    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399288    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.399301    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399321    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.399330    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.399339    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.399347    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.399355    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.399363    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	I0728 17:59:16.399371    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.399378    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	I0728 17:59:16.399399    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.399408    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	I0728 17:59:16.399429    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.399457    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.399464    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.399492    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.399501    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.399507    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.399513    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.399522    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.399528    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.399545    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.399553    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.399559    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.399564    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.399574    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.399581    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	I0728 17:59:16.399696    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.399709    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	I0728 17:59:16.399718    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.399726    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.399735    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399744    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.399753    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399767    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399777    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399792    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399801    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399812    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399821    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399831    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399840    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399853    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399863    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399876    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399910    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.399925    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.399934    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.399943    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.399952    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.399961    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.399969    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.399980    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.399991    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.400000    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400009    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400021    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.400031    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.400040    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400050    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400059    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400068    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400077    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400086    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400096    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400105    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400173    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400185    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400194    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400203    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400216    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400225    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400234    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400243    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400251    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400260    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400269    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400278    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400286    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400295    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.400304    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400312    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400321    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400332    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400344    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400354    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400365    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.400375    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400543    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.400553    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.400561    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.400572    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.400580    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.400588    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	I0728 17:59:16.400596    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.400604    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	I0728 17:59:16.400623    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.400634    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.400642    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	I0728 17:59:16.400652    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.400659    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.400669    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.400676    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.400683    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.400691    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.400701    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.400716    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.400730    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.400739    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.400746    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.400751    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.400757    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.400763    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.400770    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	I0728 17:59:16.400830    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.400840    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	I0728 17:59:16.400849    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.400859    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.400881    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400890    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.400899    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400909    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400918    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400934    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400942    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400956    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400968    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400977    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400989    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401003    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401012    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401028    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401037    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.401046    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.401054    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.401063    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.401072    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.401081    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.401092    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.401101    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.401110    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.401119    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401131    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401140    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.401150    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.401160    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401169    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401178    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401199    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401209    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401218    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401228    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401241    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401289    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401303    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401312    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401322    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401330    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401339    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401348    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401357    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401366    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401376    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401385    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401394    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401403    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401412    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.401420    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401428    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401437    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401446    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401460    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401481    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401492    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.401593    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401606    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.401620    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.401628    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.401637    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.401645    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.401653    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	I0728 17:59:16.401660    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.401668    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	I0728 17:59:16.401689    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.401703    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.401711    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	I0728 17:59:16.401721    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.401730    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.401738    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.401745    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.401751    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.401760    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401774    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401784    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401794    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401803    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401838    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401853    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401866    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401880    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401894    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401904    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401914    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401924    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401938    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401948    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401958    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401968    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401977    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401987    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401997    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402013    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402025    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402041    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402054    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402095    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402108    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402118    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402128    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402142    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402152    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402162    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402171    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402181    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402191    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402204    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402214    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402234    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402244    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402254    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402264    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402274    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402284    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402297    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402342    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402355    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402365    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402374    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402385    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402395    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402404    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402414    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402424    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402434    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402444    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402459    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402469    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402481    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402491    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402502    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402512    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402523    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402533    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402626    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402640    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402650    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402661    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402671    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402679    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402690    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402700    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402710    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402723    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402741    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0728 17:59:16.402749    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.402760    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.402770    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402780    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402788    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402799    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402813    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402822    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.402832    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.403003    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403018    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403028    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403038    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403046    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403056    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403066    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403081    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403094    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403108    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403118    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403129    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403140    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403155    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403164    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403175    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403183    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403198    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403207    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403218    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403226    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403235    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403247    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403255    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403266    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403278    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403294    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403304    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403314    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403322    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403333    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403342    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403356    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403368    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403376    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403386    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403396    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403405    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403492    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403510    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403517    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403528    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403538    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403548    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403555    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403572    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	I0728 17:59:16.403584    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403593    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403604    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403611    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403621    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.403628    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.403638    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.403648    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.403658    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.403666    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.403673    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	I0728 17:59:16.403686    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.403696    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	I0728 17:59:16.403704    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 17:59:16.403716    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 17:59:16.403721    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 17:59:16.403735    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 17:59:16.437925    2067 out.go:177] 
	W0728 17:59:16.458779    2067 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 17:59:16.459445    2067 out.go:239] * 
	W0728 17:59:16.460660    2067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:59:16.543445    2067 out.go:177] 
	
	
	==> Docker <==
	Jul 29 01:01:17 functional-596000 dockerd[4353]: time="2024-07-29T01:01:17.133091596Z" level=info msg="Starting up"
	Jul 29 01:02:17 functional-596000 dockerd[4353]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed'"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="error getting RW layer size for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:02:17 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:02:17Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634'"
	Jul 29 01:02:17 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:02:17 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:02:17 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 29 01:02:17 functional-596000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jul 29 01:02:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:02:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-29T01:02:19Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.071501] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +2.464238] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.103266] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.116452] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.130252] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.974695] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.052634] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.632602] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +4.717931] systemd-fstab-generator[1694]: Ignoring "noauto" option for root device
	[  +0.052232] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.965900] systemd-fstab-generator[2101]: Ignoring "noauto" option for root device
	[  +0.068473] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.556217] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.144175] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 00:58] kauditd_printk_skb: 98 callbacks suppressed
	[  +3.703331] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[  +0.280018] systemd-fstab-generator[3216]: Ignoring "noauto" option for root device
	[  +0.136220] systemd-fstab-generator[3228]: Ignoring "noauto" option for root device
	[  +0.135284] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[  +5.159757] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 01:02] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000049] clocksource:                       'hpet' wd_now: b6c345a4 wd_last: b5ef4422 mask: ffffffff
	[  +0.000044] clocksource:                       'tsc' cs_now: 587809d696b cs_last: 586789366bd mask: ffffffffffffffff
	[  +0.000172] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.000295] clocksource: Checking clocksource tsc synchronization from CPU 0.
	
	
	==> kernel <==
	 01:03:17 up 6 min,  0 users,  load average: 0.00, 0.07, 0.05
	Linux functional-596000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 01:03:11 functional-596000 kubelet[2108]: E0729 01:03:11.976225    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:03:11 functional-596000 kubelet[2108]: E0729 01:03:11.976357    2108 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 01:03:12 functional-596000 kubelet[2108]: E0729 01:03:12.956256    2108 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 5m7.94562875s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 29 01:03:13 functional-596000 kubelet[2108]: E0729 01:03:13.031479    2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused" interval="7s"
	Jul 29 01:03:15 functional-596000 kubelet[2108]: E0729 01:03:15.182269    2108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.169.0.4:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-functional-596000.17e689221bb8c1fc  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-functional-596000,UID:380a931fa2b3ce8bb8f4f569b3423cf2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-596000,},FirstTimestamp:2024-07-29 00:58:07.1027799 +0000 UTC m=+31.650216464,LastTimestamp:2024-07-29 00:58:07.1027799 +0000 UTC m=+31.650216464,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-596000,}"
	Jul 29 01:03:15 functional-596000 kubelet[2108]: I0729 01:03:15.544537    2108 status_manager.go:853] "Failed to get status for pod" podUID="471ce4342a500a995eaa994abbd56071" pod="kube-system/kube-apiserver-functional-596000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-596000\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.372329    2108 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.373318    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374563    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.373544    2108 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374720    2108 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374740    2108 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374447    2108 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374804    2108 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374191    2108 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374849    2108 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: I0729 01:03:17.374869    2108 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374222    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374934    2108 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.374283    2108 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.375039    2108 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: I0729 01:03:17.375081    2108 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.376319    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.376374    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:03:17 functional-596000 kubelet[2108]: E0729 01:03:17.376503    2108 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	E0728 18:02:16.896535    2385 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.911759    2385 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.926055    2385 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.941714    2385 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.955694    2385 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.970553    2385 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.985642    2385 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:02:16.998946    2385 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000: exit status 2 (159.694568ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-596000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (120.38s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl images: exit status 1 (2.147747512s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1126: failed to get images by "out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl images" ssh exit status 1
functional_test.go:1130: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.15s)

TestFunctional/serial/CacheCmd/cache/cache_reload (180.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh sudo docker rmi registry.k8s.io/pause:latest
E0728 18:10:50.008346    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (57.711916736s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-amd64 -p functional-596000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (2.157191823s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cache reload
E0728 18:12:13.101614    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:1158: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 cache reload: (1m58.239979192s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (2.157303569s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1165: expected "out/minikube-darwin-amd64 -p functional-596000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (180.27s)

TestFunctional/serial/MinikubeKubectlCmd (120.27s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 kubectl -- --context functional-596000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 kubectl -- --context functional-596000 get pods: exit status 1 (2.416077656s)

** stderr ** 
	E0728 18:15:21.792513    2577 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:15:21.893611    2577 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:15:21.995844    2577 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:15:22.097078    2577 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:15:22.197872    2577 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	The connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-amd64 -p functional-596000 kubectl -- --context functional-596000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000: exit status 2 (154.663285ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 logs -n 25
E0728 18:15:50.039776    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 logs -n 25: (1m57.501527362s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | pause                                                          |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:55 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:55 PDT | 28 Jul 24 17:56 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| delete  | -p nospam-292000                                               | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	| start   | -p functional-596000                                           | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:58 PDT |
	|         | --memory=4000                                                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                          |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                   |                   |         |         |                     |                     |
	| start   | -p functional-596000                                           | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:58 PDT |                     |
	|         | --alsologtostderr -v=8                                         |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:03 PDT | 28 Jul 24 18:05 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:05 PDT | 28 Jul 24 18:07 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:07 PDT | 28 Jul 24 18:09 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:09 PDT | 28 Jul 24 18:10 PDT |
	|         | minikube-local-cache-test:functional-596000                    |                   |         |         |                     |                     |
	| cache   | functional-596000 cache delete                                 | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT | 28 Jul 24 18:10 PDT |
	|         | minikube-local-cache-test:functional-596000                    |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT | 28 Jul 24 18:10 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | list                                                           | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT | 28 Jul 24 18:10 PDT |
	| ssh     | functional-596000 ssh sudo                                     | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT |                     |
	|         | crictl images                                                  |                   |         |         |                     |                     |
	| ssh     | functional-596000                                              | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT |                     |
	|         | ssh sudo docker rmi                                            |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| ssh     | functional-596000 ssh                                          | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:11 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-596000 cache reload                                 | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:11 PDT | 28 Jul 24 18:13 PDT |
	| ssh     | functional-596000 ssh                                          | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:13 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:13 PDT | 28 Jul 24 18:13 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:13 PDT | 28 Jul 24 18:13 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| kubectl | functional-596000 kubectl --                                   | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:15 PDT |                     |
	|         | --context functional-596000                                    |                   |         |         |                     |                     |
	|         | get pods                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:58:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:58:03.181908    2067 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:58:03.182088    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182094    2067 out.go:304] Setting ErrFile to fd 2...
	I0728 17:58:03.182098    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182279    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:58:03.183681    2067 out.go:298] Setting JSON to false
	I0728 17:58:03.206318    2067 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1654,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:58:03.206416    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:58:03.227676    2067 out.go:177] * [functional-596000] minikube v1.33.1 on Darwin 14.5
	I0728 17:58:03.269722    2067 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:58:03.269783    2067 notify.go:220] Checking for updates...
	I0728 17:58:03.312443    2067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:58:03.333527    2067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:58:03.354627    2067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:58:03.375824    2067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 17:58:03.396566    2067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:58:03.417974    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:03.418146    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:58:03.418798    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.418872    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.428211    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50175
	I0728 17:58:03.428568    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.428964    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.428979    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.429182    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.429300    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.457784    2067 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 17:58:03.499269    2067 start.go:297] selected driver: hyperkit
	I0728 17:58:03.499285    2067 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.499388    2067 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:58:03.499488    2067 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.499604    2067 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:58:03.508339    2067 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:58:03.512503    2067 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.512529    2067 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:58:03.515340    2067 cni.go:84] Creating CNI manager for ""
	I0728 17:58:03.515390    2067 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:58:03.515469    2067 start.go:340] cluster config:
	{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.515565    2067 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.573559    2067 out.go:177] * Starting "functional-596000" primary control-plane node in "functional-596000" cluster
	I0728 17:58:03.610472    2067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:58:03.610521    2067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:58:03.610545    2067 cache.go:56] Caching tarball of preloaded images
	I0728 17:58:03.610741    2067 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 17:58:03.610759    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:58:03.610882    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/config.json ...
	I0728 17:58:03.611579    2067 start.go:360] acquireMachinesLock for functional-596000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 17:58:03.611656    2067 start.go:364] duration metric: took 61.959µs to acquireMachinesLock for "functional-596000"
	I0728 17:58:03.611681    2067 start.go:96] Skipping create...Using existing machine configuration
	I0728 17:58:03.611696    2067 fix.go:54] fixHost starting: 
	I0728 17:58:03.612004    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.612033    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.621321    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50177
	I0728 17:58:03.621639    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.622002    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.622022    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.622230    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.622342    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.622436    2067 main.go:141] libmachine: (functional-596000) Calling .GetState
	I0728 17:58:03.622567    2067 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 17:58:03.622651    2067 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
	I0728 17:58:03.623593    2067 fix.go:112] recreateIfNeeded on functional-596000: state=Running err=<nil>
	W0728 17:58:03.623608    2067 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 17:58:03.644584    2067 out.go:177] * Updating the running hyperkit "functional-596000" VM ...
	I0728 17:58:03.686410    2067 machine.go:94] provisionDockerMachine start ...
	I0728 17:58:03.686443    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.686748    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.686992    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.687220    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687442    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687672    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.687922    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.688298    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.688318    2067 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 17:58:03.737887    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.737901    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738050    2067 buildroot.go:166] provisioning hostname "functional-596000"
	I0728 17:58:03.738062    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738158    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.738247    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.738335    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738433    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738522    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.738660    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.738789    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.738804    2067 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-596000 && echo "functional-596000" | sudo tee /etc/hostname
	I0728 17:58:03.799001    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.799026    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.799176    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.799262    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799342    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799457    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.799594    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.799743    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.799755    2067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 17:58:03.848940    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:03.848963    2067 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 17:58:03.848979    2067 buildroot.go:174] setting up certificates
	I0728 17:58:03.848994    2067 provision.go:84] configureAuth start
	I0728 17:58:03.849001    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.849120    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:03.849210    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.849295    2067 provision.go:143] copyHostCerts
	I0728 17:58:03.849323    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849389    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 17:58:03.849397    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849587    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 17:58:03.849823    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.849865    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 17:58:03.849873    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.850017    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 17:58:03.850186    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850225    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 17:58:03.850230    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850308    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 17:58:03.850449    2067 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.functional-596000 san=[127.0.0.1 192.169.0.4 functional-596000 localhost minikube]
	I0728 17:58:03.967853    2067 provision.go:177] copyRemoteCerts
	I0728 17:58:03.967921    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 17:58:03.967939    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.968094    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.968192    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.968299    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.968393    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.001708    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 17:58:04.001790    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 17:58:04.022827    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 17:58:04.022891    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0728 17:58:04.042748    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 17:58:04.042810    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 17:58:04.062503    2067 provision.go:87] duration metric: took 213.493856ms to configureAuth
	I0728 17:58:04.062518    2067 buildroot.go:189] setting minikube options for container-runtime
	I0728 17:58:04.062657    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:04.062674    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.062814    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.062907    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.062999    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063076    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063159    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.063261    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.063390    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.063398    2067 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 17:58:04.115857    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 17:58:04.115869    2067 buildroot.go:70] root file system type: tmpfs
	I0728 17:58:04.115942    2067 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 17:58:04.115956    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.116086    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.116177    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116266    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116359    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.116490    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.116628    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.116676    2067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 17:58:04.180807    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 17:58:04.180831    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.180961    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.181052    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181141    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181233    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.181369    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.181514    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.181526    2067 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 17:58:04.236936    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:04.236950    2067 machine.go:97] duration metric: took 550.516869ms to provisionDockerMachine
	I0728 17:58:04.236962    2067 start.go:293] postStartSetup for "functional-596000" (driver="hyperkit")
	I0728 17:58:04.236969    2067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 17:58:04.236980    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.237151    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 17:58:04.237167    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.237259    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.237356    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.237450    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.237524    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.269248    2067 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 17:58:04.272370    2067 command_runner.go:130] > NAME=Buildroot
	I0728 17:58:04.272378    2067 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 17:58:04.272381    2067 command_runner.go:130] > ID=buildroot
	I0728 17:58:04.272385    2067 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 17:58:04.272389    2067 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 17:58:04.272475    2067 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 17:58:04.272491    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 17:58:04.272591    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 17:58:04.272782    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 17:58:04.272789    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 17:58:04.272981    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> hosts in /etc/test/nested/copy/1533
	I0728 17:58:04.272987    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> /etc/test/nested/copy/1533/hosts
	I0728 17:58:04.273049    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1533
	I0728 17:58:04.281301    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 17:58:04.301144    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts --> /etc/test/nested/copy/1533/hosts (40 bytes)
	I0728 17:58:04.321194    2067 start.go:296] duration metric: took 84.223294ms for postStartSetup
	I0728 17:58:04.321219    2067 fix.go:56] duration metric: took 709.52621ms for fixHost
	I0728 17:58:04.321235    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.321378    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.321458    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321552    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321634    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.321767    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.321915    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.321922    2067 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 17:58:04.372672    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214684.480661733
	
	I0728 17:58:04.372686    2067 fix.go:216] guest clock: 1722214684.480661733
	I0728 17:58:04.372691    2067 fix.go:229] Guest: 2024-07-28 17:58:04.480661733 -0700 PDT Remote: 2024-07-28 17:58:04.321226 -0700 PDT m=+1.173910037 (delta=159.435733ms)
	I0728 17:58:04.372708    2067 fix.go:200] guest clock delta is within tolerance: 159.435733ms
	I0728 17:58:04.372712    2067 start.go:83] releasing machines lock for "functional-596000", held for 761.044153ms
	I0728 17:58:04.372731    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.372854    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:04.372965    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373253    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373372    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373450    2067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 17:58:04.373485    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373513    2067 ssh_runner.go:195] Run: cat /version.json
	I0728 17:58:04.373523    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373581    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373615    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373688    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373706    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373784    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373796    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373868    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.373891    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.444486    2067 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 17:58:04.445070    2067 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 17:58:04.445228    2067 ssh_runner.go:195] Run: systemctl --version
	I0728 17:58:04.449759    2067 command_runner.go:130] > systemd 252 (252)
	I0728 17:58:04.449776    2067 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 17:58:04.450022    2067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 17:58:04.454258    2067 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 17:58:04.454279    2067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 17:58:04.454319    2067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 17:58:04.462388    2067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0728 17:58:04.462398    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.462514    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.477917    2067 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 17:58:04.478151    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 17:58:04.487863    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 17:58:04.497357    2067 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 17:58:04.497404    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 17:58:04.507132    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.516475    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 17:58:04.526165    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.535504    2067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 17:58:04.545511    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 17:58:04.554731    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 17:58:04.563973    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 17:58:04.573675    2067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 17:58:04.582020    2067 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 17:58:04.582227    2067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 17:58:04.591135    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:04.729887    2067 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 17:58:04.749030    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.749107    2067 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 17:58:04.763070    2067 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 17:58:04.763645    2067 command_runner.go:130] > [Unit]
	I0728 17:58:04.763655    2067 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 17:58:04.763659    2067 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 17:58:04.763664    2067 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 17:58:04.763668    2067 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 17:58:04.763673    2067 command_runner.go:130] > StartLimitBurst=3
	I0728 17:58:04.763676    2067 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 17:58:04.763680    2067 command_runner.go:130] > [Service]
	I0728 17:58:04.763686    2067 command_runner.go:130] > Type=notify
	I0728 17:58:04.763691    2067 command_runner.go:130] > Restart=on-failure
	I0728 17:58:04.763696    2067 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 17:58:04.763711    2067 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 17:58:04.763718    2067 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 17:58:04.763723    2067 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 17:58:04.763729    2067 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 17:58:04.763734    2067 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 17:58:04.763741    2067 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 17:58:04.763754    2067 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 17:58:04.763760    2067 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 17:58:04.763763    2067 command_runner.go:130] > ExecStart=
	I0728 17:58:04.763777    2067 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 17:58:04.763782    2067 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 17:58:04.763788    2067 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 17:58:04.763795    2067 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 17:58:04.763798    2067 command_runner.go:130] > LimitNOFILE=infinity
	I0728 17:58:04.763802    2067 command_runner.go:130] > LimitNPROC=infinity
	I0728 17:58:04.763807    2067 command_runner.go:130] > LimitCORE=infinity
	I0728 17:58:04.763811    2067 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 17:58:04.763815    2067 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 17:58:04.763824    2067 command_runner.go:130] > TasksMax=infinity
	I0728 17:58:04.763828    2067 command_runner.go:130] > TimeoutStartSec=0
	I0728 17:58:04.763833    2067 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 17:58:04.763837    2067 command_runner.go:130] > Delegate=yes
	I0728 17:58:04.763842    2067 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 17:58:04.763846    2067 command_runner.go:130] > KillMode=process
	I0728 17:58:04.763849    2067 command_runner.go:130] > [Install]
	I0728 17:58:04.763857    2067 command_runner.go:130] > WantedBy=multi-user.target
	I0728 17:58:04.763963    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.775171    2067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 17:58:04.803670    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.815918    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 17:58:04.827728    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.842925    2067 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 17:58:04.843170    2067 ssh_runner.go:195] Run: which cri-dockerd
	I0728 17:58:04.846059    2067 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 17:58:04.846245    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 17:58:04.854364    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 17:58:04.868292    2067 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 17:58:05.006256    2067 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 17:58:05.135902    2067 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 17:58:05.135971    2067 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 17:58:05.150351    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:05.274841    2067 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 17:59:16.388765    2067 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 17:59:16.388780    2067 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 17:59:16.388791    2067 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.113588859s)
	I0728 17:59:16.388851    2067 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 17:59:16.398150    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.398166    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	I0728 17:59:16.398196    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.398214    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0728 17:59:16.398223    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.398235    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.398246    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.398255    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.398264    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398274    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398283    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398302    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398312    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398323    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398332    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398342    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398350    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398364    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398374    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398432    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398446    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.398456    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.398464    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.398472    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.398481    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.398490    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.398500    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.398509    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.398518    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.398527    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398536    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398544    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.398554    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.398566    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398576    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398585    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398594    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398604    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398614    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398624    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398900    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398913    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398921    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398929    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398938    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398946    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398955    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398963    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398971    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398980    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398994    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399003    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399011    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399019    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399028    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.399037    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399045    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399054    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399063    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399075    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399084    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399288    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.399301    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399321    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.399330    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.399339    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.399347    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.399355    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.399363    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	I0728 17:59:16.399371    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.399378    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	I0728 17:59:16.399399    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.399408    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	I0728 17:59:16.399429    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.399457    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.399464    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.399492    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.399501    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.399507    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.399513    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.399522    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.399528    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.399545    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.399553    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.399559    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.399564    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.399574    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.399581    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	I0728 17:59:16.399696    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.399709    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	I0728 17:59:16.399718    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.399726    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.399735    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399744    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.399753    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399767    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399777    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399792    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399801    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399812    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399821    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399831    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399840    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399853    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399863    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399876    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399910    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.399925    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.399934    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.399943    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.399952    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.399961    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.399969    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.399980    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.399991    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.400000    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400009    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400021    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.400031    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.400040    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400050    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400059    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400068    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400077    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400086    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400096    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400105    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400173    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400185    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400194    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400203    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400216    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400225    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400234    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400243    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400251    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400260    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400269    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400278    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400286    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400295    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.400304    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400312    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400321    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400332    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400344    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400354    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400365    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.400375    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400543    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.400553    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.400561    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.400572    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.400580    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.400588    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	I0728 17:59:16.400596    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.400604    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	I0728 17:59:16.400623    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.400634    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.400642    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	I0728 17:59:16.400652    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.400659    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.400669    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.400676    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.400683    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.400691    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.400701    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.400716    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.400730    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.400739    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.400746    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.400751    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.400757    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.400763    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.400770    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	I0728 17:59:16.400830    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.400840    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	I0728 17:59:16.400849    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.400859    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.400881    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400890    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.400899    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400909    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400918    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400934    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400942    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400956    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400968    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400977    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400989    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401003    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401012    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401028    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401037    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.401046    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.401054    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.401063    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.401072    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.401081    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.401092    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.401101    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.401110    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.401119    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401131    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401140    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.401150    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.401160    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401169    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401178    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401199    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401209    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401218    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401228    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401241    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401289    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401303    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401312    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401322    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401330    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401339    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401348    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401357    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401366    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401376    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401385    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401394    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401403    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401412    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.401420    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401428    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401437    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401446    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401460    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401481    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401492    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.401593    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401606    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.401620    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.401628    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.401637    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.401645    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.401653    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	I0728 17:59:16.401660    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.401668    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	I0728 17:59:16.401689    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.401703    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.401711    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	I0728 17:59:16.401721    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.401730    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.401738    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.401745    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.401751    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.401760    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401774    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401784    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401794    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401803    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401838    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401853    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401866    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401880    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401894    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401904    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401914    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401924    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401938    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401948    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401958    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401968    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401977    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401987    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401997    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402013    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402025    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402041    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402054    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402095    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402108    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402118    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402128    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402142    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402152    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402162    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402171    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402181    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402191    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402204    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402214    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402234    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402244    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402254    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402264    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402274    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402284    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402297    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402342    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402355    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402365    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402374    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402385    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402395    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402404    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402414    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402424    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402434    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402444    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402459    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402469    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402481    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402491    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402502    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402512    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402523    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402533    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402626    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402640    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402650    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402661    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402671    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402679    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402690    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402700    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402710    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402723    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402741    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0728 17:59:16.402749    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.402760    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.402770    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402780    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402788    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402799    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402813    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402822    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.402832    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.403003    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403018    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403028    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403038    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403046    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403056    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403066    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403081    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403094    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403108    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403118    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403129    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403140    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403155    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403164    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403175    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403183    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403198    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403207    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403218    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403226    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403235    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403247    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403255    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403266    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403278    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403294    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403304    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403314    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403322    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403333    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403342    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403356    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403368    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403376    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403386    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403396    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403405    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403492    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403510    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403517    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403528    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403538    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403548    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403555    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403572    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	I0728 17:59:16.403584    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403593    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403604    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403611    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403621    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.403628    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.403638    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.403648    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.403658    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.403666    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.403673    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	I0728 17:59:16.403686    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.403696    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	I0728 17:59:16.403704    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 17:59:16.403716    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 17:59:16.403721    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 17:59:16.403735    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 17:59:16.437925    2067 out.go:177] 
	W0728 17:59:16.458779    2067 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 17:59:16.459445    2067 out.go:239] * 
	W0728 17:59:16.460660    2067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:59:16.543445    2067 out.go:177] 
	
	
	==> Docker <==
	Jul 29 01:16:19 functional-596000 dockerd[7659]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error getting RW layer size for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:16:19 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed'"
	Jul 29 01:16:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:16:19Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 29 01:16:19 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 29 01:16:19 functional-596000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jul 29 01:16:19 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:16:19 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:16:19 functional-596000 dockerd[7950]: time="2024-07-29T01:16:19.443048535Z" level=info msg="Starting up"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-29T01:16:21Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.071501] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +2.464238] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.103266] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.116452] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.130252] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.974695] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.052634] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.632602] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +4.717931] systemd-fstab-generator[1694]: Ignoring "noauto" option for root device
	[  +0.052232] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.965900] systemd-fstab-generator[2101]: Ignoring "noauto" option for root device
	[  +0.068473] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.556217] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.144175] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.927376] kauditd_printk_skb: 98 callbacks suppressed
	[Jul29 00:58] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[  +0.280018] systemd-fstab-generator[3216]: Ignoring "noauto" option for root device
	[  +0.136220] systemd-fstab-generator[3228]: Ignoring "noauto" option for root device
	[  +0.135284] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[  +5.159757] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 01:02] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000049] clocksource:                       'hpet' wd_now: b6c345a4 wd_last: b5ef4422 mask: ffffffff
	[  +0.000044] clocksource:                       'tsc' cs_now: 587809d696b cs_last: 586789366bd mask: ffffffffffffffff
	[  +0.000172] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.000295] clocksource: Checking clocksource tsc synchronization from CPU 0.
	
	
	==> kernel <==
	 01:17:19 up 20 min,  0 users,  load average: 0.00, 0.00, 0.01
	Linux functional-596000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 01:17:14 functional-596000 kubelet[2108]: I0729 01:17:14.552346    2108 status_manager.go:853] "Failed to get status for pod" podUID="471ce4342a500a995eaa994abbd56071" pod="kube-system/kube-apiserver-functional-596000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-596000\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.543349    2108 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 19m13.530927953s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.647459    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?resourceVersion=0&timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.649066    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.650228    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.651287    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.652676    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:17:17 functional-596000 kubelet[2108]: E0729 01:17:17.652806    2108 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.469392    2108 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.469501    2108 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.469544    2108 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.469640    2108 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: I0729 01:17:19.469663    2108 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.469801    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.469842    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.470051    2108 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.470081    2108 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.470120    2108 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.470233    2108 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.470295    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.470322    2108 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.471502    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.471679    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.472147    2108 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 29 01:17:19 functional-596000 kubelet[2108]: E0729 01:17:19.728148    2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused" interval="7s"
	

-- /stdout --
** stderr ** 
	E0728 18:16:19.166954    2583 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.185012    2583 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.199162    2583 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.213294    2583 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.226824    2583 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.241288    2583 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.256161    2583 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:16:19.271040    2583 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000: exit status 2 (152.272528ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-596000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (120.27s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (120.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-596000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-596000 get pods: exit status 1 (1.939606396s)

** stderr ** 
	E0728 18:17:21.589066    2603 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:17:21.691052    2603 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:17:21.791113    2603 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:17:21.891849    2603 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0728 18:17:21.993725    2603 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	The connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-596000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-596000 -n functional-596000: exit status 2 (151.743513ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 logs -n 25: (1m58.150696805s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | pause                                                          |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:54 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:54 PDT | 28 Jul 24 17:55 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-292000 --log_dir                                        | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:55 PDT | 28 Jul 24 17:56 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| delete  | -p nospam-292000                                               | nospam-292000     | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	| start   | -p functional-596000                                           | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:58 PDT |
	|         | --memory=4000                                                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                          |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                   |                   |         |         |                     |                     |
	| start   | -p functional-596000                                           | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 17:58 PDT |                     |
	|         | --alsologtostderr -v=8                                         |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:03 PDT | 28 Jul 24 18:05 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:05 PDT | 28 Jul 24 18:07 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:07 PDT | 28 Jul 24 18:09 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-596000 cache add                                    | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:09 PDT | 28 Jul 24 18:10 PDT |
	|         | minikube-local-cache-test:functional-596000                    |                   |         |         |                     |                     |
	| cache   | functional-596000 cache delete                                 | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT | 28 Jul 24 18:10 PDT |
	|         | minikube-local-cache-test:functional-596000                    |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT | 28 Jul 24 18:10 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | list                                                           | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT | 28 Jul 24 18:10 PDT |
	| ssh     | functional-596000 ssh sudo                                     | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT |                     |
	|         | crictl images                                                  |                   |         |         |                     |                     |
	| ssh     | functional-596000                                              | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:10 PDT |                     |
	|         | ssh sudo docker rmi                                            |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| ssh     | functional-596000 ssh                                          | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:11 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-596000 cache reload                                 | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:11 PDT | 28 Jul 24 18:13 PDT |
	| ssh     | functional-596000 ssh                                          | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:13 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:13 PDT | 28 Jul 24 18:13 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 28 Jul 24 18:13 PDT | 28 Jul 24 18:13 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| kubectl | functional-596000 kubectl --                                   | functional-596000 | jenkins | v1.33.1 | 28 Jul 24 18:15 PDT |                     |
	|         | --context functional-596000                                    |                   |         |         |                     |                     |
	|         | get pods                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:58:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:58:03.181908    2067 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:58:03.182088    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182094    2067 out.go:304] Setting ErrFile to fd 2...
	I0728 17:58:03.182098    2067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:58:03.182279    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:58:03.183681    2067 out.go:298] Setting JSON to false
	I0728 17:58:03.206318    2067 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1654,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:58:03.206416    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:58:03.227676    2067 out.go:177] * [functional-596000] minikube v1.33.1 on Darwin 14.5
	I0728 17:58:03.269722    2067 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:58:03.269783    2067 notify.go:220] Checking for updates...
	I0728 17:58:03.312443    2067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:58:03.333527    2067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:58:03.354627    2067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:58:03.375824    2067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 17:58:03.396566    2067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:58:03.417974    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:03.418146    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:58:03.418798    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.418872    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.428211    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50175
	I0728 17:58:03.428568    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.428964    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.428979    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.429182    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.429300    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.457784    2067 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 17:58:03.499269    2067 start.go:297] selected driver: hyperkit
	I0728 17:58:03.499285    2067 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.499388    2067 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:58:03.499488    2067 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.499604    2067 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:58:03.508339    2067 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:58:03.512503    2067 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.512529    2067 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:58:03.515340    2067 cni.go:84] Creating CNI manager for ""
	I0728 17:58:03.515390    2067 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:58:03.515469    2067 start.go:340] cluster config:
	{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:58:03.515565    2067 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:58:03.573559    2067 out.go:177] * Starting "functional-596000" primary control-plane node in "functional-596000" cluster
	I0728 17:58:03.610472    2067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:58:03.610521    2067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:58:03.610545    2067 cache.go:56] Caching tarball of preloaded images
	I0728 17:58:03.610741    2067 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 17:58:03.610759    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:58:03.610882    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/config.json ...
	I0728 17:58:03.611579    2067 start.go:360] acquireMachinesLock for functional-596000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 17:58:03.611656    2067 start.go:364] duration metric: took 61.959µs to acquireMachinesLock for "functional-596000"
	I0728 17:58:03.611681    2067 start.go:96] Skipping create...Using existing machine configuration
	I0728 17:58:03.611696    2067 fix.go:54] fixHost starting: 
	I0728 17:58:03.612004    2067 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:58:03.612033    2067 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 17:58:03.621321    2067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50177
	I0728 17:58:03.621639    2067 main.go:141] libmachine: () Calling .GetVersion
	I0728 17:58:03.622002    2067 main.go:141] libmachine: Using API Version  1
	I0728 17:58:03.622022    2067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 17:58:03.622230    2067 main.go:141] libmachine: () Calling .GetMachineName
	I0728 17:58:03.622342    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.622436    2067 main.go:141] libmachine: (functional-596000) Calling .GetState
	I0728 17:58:03.622567    2067 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 17:58:03.622651    2067 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
	I0728 17:58:03.623593    2067 fix.go:112] recreateIfNeeded on functional-596000: state=Running err=<nil>
	W0728 17:58:03.623608    2067 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 17:58:03.644584    2067 out.go:177] * Updating the running hyperkit "functional-596000" VM ...
	I0728 17:58:03.686410    2067 machine.go:94] provisionDockerMachine start ...
	I0728 17:58:03.686443    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:03.686748    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.686992    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.687220    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687442    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.687672    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.687922    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.688298    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.688318    2067 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 17:58:03.737887    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.737901    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738050    2067 buildroot.go:166] provisioning hostname "functional-596000"
	I0728 17:58:03.738062    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.738158    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.738247    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.738335    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738433    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.738522    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.738660    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.738789    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.738804    2067 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-596000 && echo "functional-596000" | sudo tee /etc/hostname
	I0728 17:58:03.799001    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-596000
	
	I0728 17:58:03.799026    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.799176    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.799262    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799342    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.799457    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.799594    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:03.799743    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:03.799755    2067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-596000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-596000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-596000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 17:58:03.848940    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:03.848963    2067 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 17:58:03.848979    2067 buildroot.go:174] setting up certificates
	I0728 17:58:03.848994    2067 provision.go:84] configureAuth start
	I0728 17:58:03.849001    2067 main.go:141] libmachine: (functional-596000) Calling .GetMachineName
	I0728 17:58:03.849120    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:03.849210    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.849295    2067 provision.go:143] copyHostCerts
	I0728 17:58:03.849323    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849389    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 17:58:03.849397    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 17:58:03.849587    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 17:58:03.849823    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.849865    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 17:58:03.849873    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 17:58:03.850017    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 17:58:03.850186    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850225    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 17:58:03.850230    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 17:58:03.850308    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 17:58:03.850449    2067 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.functional-596000 san=[127.0.0.1 192.169.0.4 functional-596000 localhost minikube]
	I0728 17:58:03.967853    2067 provision.go:177] copyRemoteCerts
	I0728 17:58:03.967921    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 17:58:03.967939    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:03.968094    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:03.968192    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:03.968299    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:03.968393    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.001708    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 17:58:04.001790    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 17:58:04.022827    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 17:58:04.022891    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0728 17:58:04.042748    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 17:58:04.042810    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 17:58:04.062503    2067 provision.go:87] duration metric: took 213.493856ms to configureAuth
	I0728 17:58:04.062518    2067 buildroot.go:189] setting minikube options for container-runtime
	I0728 17:58:04.062657    2067 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:58:04.062674    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.062814    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.062907    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.062999    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063076    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.063159    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.063261    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.063390    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.063398    2067 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 17:58:04.115857    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 17:58:04.115869    2067 buildroot.go:70] root file system type: tmpfs
	I0728 17:58:04.115942    2067 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 17:58:04.115956    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.116086    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.116177    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116266    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.116359    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.116490    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.116628    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.116676    2067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 17:58:04.180807    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 17:58:04.180831    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.180961    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.181052    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181141    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.181233    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.181369    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.181514    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.181526    2067 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 17:58:04.236936    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 17:58:04.236950    2067 machine.go:97] duration metric: took 550.516869ms to provisionDockerMachine
	I0728 17:58:04.236962    2067 start.go:293] postStartSetup for "functional-596000" (driver="hyperkit")
	I0728 17:58:04.236969    2067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 17:58:04.236980    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.237151    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 17:58:04.237167    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.237259    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.237356    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.237450    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.237524    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.269248    2067 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 17:58:04.272370    2067 command_runner.go:130] > NAME=Buildroot
	I0728 17:58:04.272378    2067 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 17:58:04.272381    2067 command_runner.go:130] > ID=buildroot
	I0728 17:58:04.272385    2067 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 17:58:04.272389    2067 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 17:58:04.272475    2067 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 17:58:04.272491    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 17:58:04.272591    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 17:58:04.272782    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 17:58:04.272789    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 17:58:04.272981    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> hosts in /etc/test/nested/copy/1533
	I0728 17:58:04.272987    2067 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts -> /etc/test/nested/copy/1533/hosts
	I0728 17:58:04.273049    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1533
	I0728 17:58:04.281301    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 17:58:04.301144    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts --> /etc/test/nested/copy/1533/hosts (40 bytes)
	I0728 17:58:04.321194    2067 start.go:296] duration metric: took 84.223294ms for postStartSetup
	I0728 17:58:04.321219    2067 fix.go:56] duration metric: took 709.52621ms for fixHost
	I0728 17:58:04.321235    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.321378    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.321458    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321552    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.321634    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.321767    2067 main.go:141] libmachine: Using SSH client type: native
	I0728 17:58:04.321915    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1c5d0c0] 0x1c5fe20 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0728 17:58:04.321922    2067 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 17:58:04.372672    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214684.480661733
	
	I0728 17:58:04.372686    2067 fix.go:216] guest clock: 1722214684.480661733
	I0728 17:58:04.372691    2067 fix.go:229] Guest: 2024-07-28 17:58:04.480661733 -0700 PDT Remote: 2024-07-28 17:58:04.321226 -0700 PDT m=+1.173910037 (delta=159.435733ms)
	I0728 17:58:04.372708    2067 fix.go:200] guest clock delta is within tolerance: 159.435733ms
	I0728 17:58:04.372712    2067 start.go:83] releasing machines lock for "functional-596000", held for 761.044153ms
	I0728 17:58:04.372731    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.372854    2067 main.go:141] libmachine: (functional-596000) Calling .GetIP
	I0728 17:58:04.372965    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373253    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373372    2067 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 17:58:04.373450    2067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 17:58:04.373485    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373513    2067 ssh_runner.go:195] Run: cat /version.json
	I0728 17:58:04.373523    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
	I0728 17:58:04.373581    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373615    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
	I0728 17:58:04.373688    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373706    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
	I0728 17:58:04.373784    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373796    2067 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
	I0728 17:58:04.373868    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.373891    2067 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
	I0728 17:58:04.444486    2067 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 17:58:04.445070    2067 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 17:58:04.445228    2067 ssh_runner.go:195] Run: systemctl --version
	I0728 17:58:04.449759    2067 command_runner.go:130] > systemd 252 (252)
	I0728 17:58:04.449776    2067 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 17:58:04.450022    2067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 17:58:04.454258    2067 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 17:58:04.454279    2067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 17:58:04.454319    2067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 17:58:04.462388    2067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0728 17:58:04.462398    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.462514    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.477917    2067 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 17:58:04.478151    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 17:58:04.487863    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 17:58:04.497357    2067 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 17:58:04.497404    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 17:58:04.507132    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.516475    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 17:58:04.526165    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 17:58:04.535504    2067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 17:58:04.545511    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 17:58:04.554731    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 17:58:04.563973    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 17:58:04.573675    2067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 17:58:04.582020    2067 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 17:58:04.582227    2067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 17:58:04.591135    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:04.729887    2067 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 17:58:04.749030    2067 start.go:495] detecting cgroup driver to use...
	I0728 17:58:04.749107    2067 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 17:58:04.763070    2067 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 17:58:04.763645    2067 command_runner.go:130] > [Unit]
	I0728 17:58:04.763655    2067 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 17:58:04.763659    2067 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 17:58:04.763664    2067 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 17:58:04.763668    2067 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 17:58:04.763673    2067 command_runner.go:130] > StartLimitBurst=3
	I0728 17:58:04.763676    2067 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 17:58:04.763680    2067 command_runner.go:130] > [Service]
	I0728 17:58:04.763686    2067 command_runner.go:130] > Type=notify
	I0728 17:58:04.763691    2067 command_runner.go:130] > Restart=on-failure
	I0728 17:58:04.763696    2067 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 17:58:04.763711    2067 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 17:58:04.763718    2067 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 17:58:04.763723    2067 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 17:58:04.763729    2067 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 17:58:04.763734    2067 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 17:58:04.763741    2067 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 17:58:04.763754    2067 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 17:58:04.763760    2067 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 17:58:04.763763    2067 command_runner.go:130] > ExecStart=
	I0728 17:58:04.763777    2067 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 17:58:04.763782    2067 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 17:58:04.763788    2067 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 17:58:04.763795    2067 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 17:58:04.763798    2067 command_runner.go:130] > LimitNOFILE=infinity
	I0728 17:58:04.763802    2067 command_runner.go:130] > LimitNPROC=infinity
	I0728 17:58:04.763807    2067 command_runner.go:130] > LimitCORE=infinity
	I0728 17:58:04.763811    2067 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 17:58:04.763815    2067 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 17:58:04.763824    2067 command_runner.go:130] > TasksMax=infinity
	I0728 17:58:04.763828    2067 command_runner.go:130] > TimeoutStartSec=0
	I0728 17:58:04.763833    2067 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 17:58:04.763837    2067 command_runner.go:130] > Delegate=yes
	I0728 17:58:04.763842    2067 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 17:58:04.763846    2067 command_runner.go:130] > KillMode=process
	I0728 17:58:04.763849    2067 command_runner.go:130] > [Install]
	I0728 17:58:04.763857    2067 command_runner.go:130] > WantedBy=multi-user.target
	I0728 17:58:04.763963    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.775171    2067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 17:58:04.803670    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 17:58:04.815918    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 17:58:04.827728    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 17:58:04.842925    2067 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 17:58:04.843170    2067 ssh_runner.go:195] Run: which cri-dockerd
	I0728 17:58:04.846059    2067 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 17:58:04.846245    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 17:58:04.854364    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 17:58:04.868292    2067 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 17:58:05.006256    2067 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 17:58:05.135902    2067 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 17:58:05.135971    2067 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 17:58:05.150351    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 17:58:05.274841    2067 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 17:59:16.388765    2067 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 17:59:16.388780    2067 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 17:59:16.388791    2067 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.113588859s)
	I0728 17:59:16.388851    2067 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 17:59:16.398150    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.398166    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	I0728 17:59:16.398196    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.398214    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0728 17:59:16.398223    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.398235    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.398246    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.398255    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.398264    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398274    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398283    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398302    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398312    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398323    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398332    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398342    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398350    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398364    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398374    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.398432    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.398446    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.398456    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.398464    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.398472    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.398481    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.398490    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.398500    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.398509    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.398518    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.398527    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398536    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.398544    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.398554    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.398566    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398576    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398585    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398594    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398604    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398614    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398624    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398900    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.398913    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398921    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398929    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398938    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398946    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398955    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398963    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398971    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398980    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.398994    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399003    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399011    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399019    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399028    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.399037    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399045    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399054    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399063    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399075    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.399084    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399288    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.399301    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.399321    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.399330    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.399339    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.399347    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.399355    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.399363    2067 command_runner.go:130] > Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	I0728 17:59:16.399371    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.399378    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	I0728 17:59:16.399399    2067 command_runner.go:130] > Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.399408    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	I0728 17:59:16.399429    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.399457    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.399464    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.399492    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.399501    2067 command_runner.go:130] > Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.399507    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.399513    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.399522    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.399528    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.399545    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.399553    2067 command_runner.go:130] > Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.399559    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.399564    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.399574    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.399581    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	I0728 17:59:16.399696    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.399709    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	I0728 17:59:16.399718    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.399726    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.399735    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.399744    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.399753    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399767    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399777    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399792    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399801    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399812    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399821    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399831    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399840    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399853    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399863    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.399876    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.399910    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.399925    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.399934    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.399943    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.399952    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.399961    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.399969    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.399980    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.399991    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.400000    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400009    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.400021    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.400031    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.400040    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400050    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400059    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400068    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400077    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400086    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400096    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400105    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.400173    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400185    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400194    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400203    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400216    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400225    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400234    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400243    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400251    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400260    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400269    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400278    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400286    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400295    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.400304    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400312    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400321    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400332    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400344    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.400354    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400365    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.400375    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.400543    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.400553    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.400561    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.400572    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.400580    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.400588    2067 command_runner.go:130] > Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	I0728 17:59:16.400596    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.400604    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	I0728 17:59:16.400623    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.400634    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.400642    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	I0728 17:59:16.400652    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.400659    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.400669    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.400676    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.400683    2067 command_runner.go:130] > Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.400691    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.400701    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.400716    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.400730    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.400739    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.400746    2067 command_runner.go:130] > Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.400751    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.400757    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.400763    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.400770    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	I0728 17:59:16.400830    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 17:59:16.400840    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	I0728 17:59:16.400849    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 17:59:16.400859    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 17:59:16.400881    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 17:59:16.400890    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 17:59:16.400899    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400909    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400918    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400934    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400942    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400956    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.400968    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400977    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.400989    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401003    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401012    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 17:59:16.401028    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 17:59:16.401037    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 17:59:16.401046    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 17:59:16.401054    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	I0728 17:59:16.401063    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 17:59:16.401072    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 17:59:16.401081    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 17:59:16.401092    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 17:59:16.401101    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 17:59:16.401110    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 17:59:16.401119    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401131    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 17:59:16.401140    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 17:59:16.401150    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 17:59:16.401160    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401169    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401178    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401199    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401209    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401218    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401228    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401241    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 17:59:16.401289    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401303    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401312    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401322    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401330    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401339    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401348    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401357    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401366    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401376    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401385    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401394    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401403    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401412    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 17:59:16.401420    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401428    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401437    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401446    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401460    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 17:59:16.401481    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 17:59:16.401492    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 17:59:16.401593    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 17:59:16.401606    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 17:59:16.401620    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	I0728 17:59:16.401628    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 17:59:16.401637    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 17:59:16.401645    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 17:59:16.401653    2067 command_runner.go:130] > Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	I0728 17:59:16.401660    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 17:59:16.401668    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	I0728 17:59:16.401689    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 17:59:16.401703    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 17:59:16.401711    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	I0728 17:59:16.401721    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 17:59:16.401730    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	I0728 17:59:16.401738    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 17:59:16.401745    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	I0728 17:59:16.401751    2067 command_runner.go:130] > Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	I0728 17:59:16.401760    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401774    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401784    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401794    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401803    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401838    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401853    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401866    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401880    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401894    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401904    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401914    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401924    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401938    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401948    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401958    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401968    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.401977    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.401987    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.401997    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402013    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402025    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402041    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402054    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402095    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402108    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402118    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402128    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402142    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402152    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402162    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402171    2067 command_runner.go:130] > Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402181    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402191    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402204    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402214    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402234    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402244    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402254    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402264    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402274    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402284    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402297    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402342    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402355    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402365    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402374    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402385    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402395    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402404    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402414    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402424    2067 command_runner.go:130] > Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402434    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402444    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402459    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402469    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402481    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402491    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402502    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402512    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402523    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0728 17:59:16.402533    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0728 17:59:16.402626    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402640    2067 command_runner.go:130] > Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0728 17:59:16.402650    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402661    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402671    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	I0728 17:59:16.402679    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402690    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402700    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402710    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	I0728 17:59:16.402723    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402741    2067 command_runner.go:130] > Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0728 17:59:16.402749    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	I0728 17:59:16.402760    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	I0728 17:59:16.402770    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402780    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	I0728 17:59:16.402788    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.402799    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402813    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.402822    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.402832    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	I0728 17:59:16.403003    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403018    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403028    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403038    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	I0728 17:59:16.403046    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403056    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403066    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	I0728 17:59:16.403081    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403094    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403108    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403118    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403129    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	I0728 17:59:16.403140    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403155    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403164    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403175    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	I0728 17:59:16.403183    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403198    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403207    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403218    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	I0728 17:59:16.403226    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403235    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403247    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	I0728 17:59:16.403255    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403266    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403278    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403294    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403304    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403314    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	I0728 17:59:16.403322    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403333    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403342    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403356    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403368    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	I0728 17:59:16.403376    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403386    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403396    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	I0728 17:59:16.403405    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403492    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403510    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	I0728 17:59:16.403517    2067 command_runner.go:130] > Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403528    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403538    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403548    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	I0728 17:59:16.403555    2067 command_runner.go:130] > Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403572    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	I0728 17:59:16.403584    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0728 17:59:16.403593    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403604    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	I0728 17:59:16.403611    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	I0728 17:59:16.403621    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 17:59:16.403628    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	I0728 17:59:16.403638    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 17:59:16.403648    2067 command_runner.go:130] > Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 17:59:16.403658    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	I0728 17:59:16.403666    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	I0728 17:59:16.403673    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	I0728 17:59:16.403686    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	I0728 17:59:16.403696    2067 command_runner.go:130] > Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	I0728 17:59:16.403704    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 17:59:16.403716    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 17:59:16.403721    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 17:59:16.403735    2067 command_runner.go:130] > Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 17:59:16.437925    2067 out.go:177] 
	W0728 17:59:16.458779    2067 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 00:57:13 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797365474Z" level=info msg="Starting up"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.797812498Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:13 functional-596000 dockerd[514]: time="2024-07-29T00:57:13.799746278Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.817209839Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833006693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833027623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833063048Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833073437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833127019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833331655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833366436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833378117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833385070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833441900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.833582244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835042594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835101927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835241609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835284736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835372957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.835438009Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837622113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837721038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837768434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837808041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837840429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.837936427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838141537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838308394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838347183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838384605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838419232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838451200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838482769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838513376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838546249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838577148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838606171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838634886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838675799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838712449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838744137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838773905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838803063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838838392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838872381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838902742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838935507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838966734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.838994870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839022479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839050538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839129561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839170342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839201357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839229605Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839300959Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839344419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839377180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839407452Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839436175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839464659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839492819Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839668472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839754400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839823157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:13 functional-596000 dockerd[521]: time="2024-07-29T00:57:13.839861606Z" level=info msg="containerd successfully booted in 0.023368s"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.840311727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.846796524Z" level=info msg="Loading containers: start."
	Jul 29 00:57:14 functional-596000 dockerd[514]: time="2024-07-29T00:57:14.931863378Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.016652031Z" level=info msg="Loading containers: done."
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023601347Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.023702083Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056431503Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:15 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:15 functional-596000 dockerd[514]: time="2024-07-29T00:57:15.056529625Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.221309736Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:16 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222558264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222867738Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222936309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:16 functional-596000 dockerd[514]: time="2024-07-29T00:57:16.222951150Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:17 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:17 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:17 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251533872Z" level=info msg="Starting up"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.251992238Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:17 functional-596000 dockerd[915]: time="2024-07-29T00:57:17.252592079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=921
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.268000022Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283126898Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283245051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283296543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283329167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283372267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283410007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283528327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283565809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283595793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283624050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283661411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.283760929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285373046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285426942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285565612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285609205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285647249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285681508Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285827566Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285877187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285910515Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285942139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.285973140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286024088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286256555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286331375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286365544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286394955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286424527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286453657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286484741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286516234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286546601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286579857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286611348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286641030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286674739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286706453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286744971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286779178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286808354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286841128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286870616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286899451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286928600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286965950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.286999059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287027761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287057255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287089564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287124670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287221056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287260008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287333254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287377987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287446465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287477602Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287506315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287535151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287565710Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287745237Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287832539Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287924952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:17 functional-596000 dockerd[921]: time="2024-07-29T00:57:17.287968311Z" level=info msg="containerd successfully booted in 0.020373s"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.331881234Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.335683791Z" level=info msg="Loading containers: start."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.404366470Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.461547560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.503511121Z" level=info msg="Loading containers: done."
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521014736Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.521083688Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.540963112Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:18 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:18 functional-596000 dockerd[915]: time="2024-07-29T00:57:18.542092231Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.000429486Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001308281Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001458767Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001520154Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:57:23 functional-596000 dockerd[915]: time="2024-07-29T00:57:23.001554783Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:57:23 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:57:24 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:57:24 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.044513551Z" level=info msg="Starting up"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045165961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 00:57:24 functional-596000 dockerd[1271]: time="2024-07-29T00:57:24.045779957Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.063819849Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078790454Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078861840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078909723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078942873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.078982590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079016511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079177290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079221517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079256669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079285006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079322780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.079417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.080975138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081019961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081189849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081230906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081268915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081307449Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081514588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081566132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081599424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081630245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081660433Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081711134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.081935254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082003682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082071378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082106832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082141456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082171351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082199983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082230279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082259644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082288397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082316493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082344152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082389242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082427480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082458087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082487933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082526801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082561143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082590891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082620127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082660502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082695658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082725026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082756282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082785403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082815558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082880362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082908909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.082981072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083071337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083112046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083141558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083173553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083204127Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083234220Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083428164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083514894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083575557Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 00:57:24 functional-596000 dockerd[1278]: time="2024-07-29T00:57:24.083620565Z" level=info msg="containerd successfully booted in 0.020314s"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.066266767Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.253647977Z" level=info msg="Loading containers: start."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.324491630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.382701703Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.431702433Z" level=info msg="Loading containers: done."
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440864156Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.440919518Z" level=info msg="Daemon has completed initialization"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461512437Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 00:57:25 functional-596000 dockerd[1271]: time="2024-07-29T00:57:25.461664145Z" level=info msg="API listen on [::]:2376"
	Jul 29 00:57:25 functional-596000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260281303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260392108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260412572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.260489352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276138579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276301037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276372584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.276521849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.306891402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307066345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307094251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.307168510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311048212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311102810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311112372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.311392763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477710685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477915589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.477973011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.478174177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494763986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494800644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494808461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.494862529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502898043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.502995270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503073968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.503177666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514475802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514545542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514558720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:31 functional-596000 dockerd[1278]: time="2024-07-29T00:57:31.514861602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352521512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352642496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352656093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.352791637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466457350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466735785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.466880396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.467238809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.588902278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589163604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589274541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.589440546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647495237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.647976971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648164904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.648777321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931384339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931493404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931506590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:50 functional-596000 dockerd[1278]: time="2024-07-29T00:57:50.931657800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162455309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162701812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.162759021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.163278524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398231755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398332961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398346800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.398679657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496031526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496097397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496109988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:51 functional-596000 dockerd[1278]: time="2024-07-29T00:57:51.496427740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.034495755Z" level=info msg="shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.034611180Z" level=info msg="ignoring event" container=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035089465Z" level=warning msg="cleaning up after shim disconnected" id=411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.035158793Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1271]: time="2024-07-29T00:58:01.111407350Z" level=info msg="ignoring event" container=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111763077Z" level=info msg="shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111847732Z" level=warning msg="cleaning up after shim disconnected" id=66079ec12fb8782df9d4cee8292004e656d875eaf7af2c6e1f6bd76a4b5ee5f8 namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.111857207Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:01 functional-596000 dockerd[1278]: time="2024-07-29T00:58:01.123414689Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:58:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.395458525Z" level=info msg="Processing signal 'terminated'"
	Jul 29 00:58:05 functional-596000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448770229Z" level=info msg="shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448865323Z" level=warning msg="cleaning up after shim disconnected" id=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.448875148Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.449287739Z" level=info msg="ignoring event" container=5f9472f99b8bfa4af1b508b1a2d33e0e21cb40b9392905cb5113ceb74336ac24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.499547099Z" level=info msg="ignoring event" container=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.499966665Z" level=info msg="shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500680178Z" level=warning msg="cleaning up after shim disconnected" id=cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.500689740Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.552833990Z" level=info msg="ignoring event" container=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553672267Z" level=info msg="shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553743408Z" level=warning msg="cleaning up after shim disconnected" id=28af7c747800db248fc20586d6bac846b00e5ddfdb8418e7e7528f81b283a82e namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553752377Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.553855742Z" level=info msg="shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554337023Z" level=warning msg="cleaning up after shim disconnected" id=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.554382869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.554596147Z" level=info msg="ignoring event" container=e8b459542068d8cdc28f495236f6bdb2084dcc9aa3480bd9ceb656b35a07891f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.558112577Z" level=info msg="ignoring event" container=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558368677Z" level=info msg="shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558831783Z" level=warning msg="cleaning up after shim disconnected" id=fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.558877595Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.562511968Z" level=info msg="ignoring event" container=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562633349Z" level=info msg="shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562699850Z" level=warning msg="cleaning up after shim disconnected" id=c7df3f760daa4466ddfdd0bc6d9dc986811adbc3755904e3fc9a6ea4a11bee02 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.562708631Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.573772031Z" level=info msg="ignoring event" container=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574181868Z" level=info msg="shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574256709Z" level=warning msg="cleaning up after shim disconnected" id=aff9c378cc075e67d041611d4af1131d8aae9c031b4cf217fba3abb8db2a1937 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.574265704Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584212617Z" level=info msg="shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584332022Z" level=warning msg="cleaning up after shim disconnected" id=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.584390716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589054926Z" level=info msg="ignoring event" container=ac96c3a2bbe68d429ea15cba7b7107bb195f8c392c19f28825604b182d86287f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589169542Z" level=info msg="ignoring event" container=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.589300211Z" level=info msg="ignoring event" container=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591696979Z" level=info msg="shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591753738Z" level=warning msg="cleaning up after shim disconnected" id=1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.591762049Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.592142540Z" level=info msg="ignoring event" container=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.593743099Z" level=info msg="shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1271]: time="2024-07-29T00:58:05.594556393Z" level=info msg="ignoring event" container=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594895783Z" level=warning msg="cleaning up after shim disconnected" id=dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594940013Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594341936Z" level=info msg="shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599531022Z" level=warning msg="cleaning up after shim disconnected" id=4fd5c30d405baf687bfa96b3fb5cfe8b483920e061e62867f1cf604584cdea21 namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.599564549Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.594363171Z" level=info msg="shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603697728Z" level=warning msg="cleaning up after shim disconnected" id=019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf namespace=moby
	Jul 29 00:58:05 functional-596000 dockerd[1278]: time="2024-07-29T00:58:05.603706128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1271]: time="2024-07-29T00:58:10.446248538Z" level=info msg="ignoring event" container=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446453571Z" level=info msg="shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446483266Z" level=warning msg="cleaning up after shim disconnected" id=15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad namespace=moby
	Jul 29 00:58:10 functional-596000 dockerd[1278]: time="2024-07-29T00:58:10.446489626Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.437850835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.461680643Z" level=info msg="ignoring event" container=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462134272Z" level=info msg="shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462432578Z" level=warning msg="cleaning up after shim disconnected" id=c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924 namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1278]: time="2024-07-29T00:58:15.462709085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.480818399Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481284133Z" level=info msg="Daemon shutdown complete"
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481351043Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 00:58:15 functional-596000 dockerd[1271]: time="2024-07-29T00:58:15.481513507Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 00:58:16 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 00:58:16 functional-596000 systemd[1]: docker.service: Consumed 2.317s CPU time.
	Jul 29 00:58:16 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 00:58:16 functional-596000 dockerd[3649]: time="2024-07-29T00:58:16.519764667Z" level=info msg="Starting up"
	Jul 29 00:59:16 functional-596000 dockerd[3649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 00:59:16 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 00:59:16 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 17:59:16.459445    2067 out.go:239] * 
	W0728 17:59:16.460660    2067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:59:16.543445    2067 out.go:177] 
	
	
	==> Docker <==
	Jul 29 01:18:19 functional-596000 dockerd[8181]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:18:19 functional-596000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:18:19 functional-596000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:18:19 functional-596000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '411470dfcd499a9e4d37d11f384efd0cd58a8b5aecb8b7872e8e901bf66917eb'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c41f586ec0caa3d5b1efa6d4eaa6c0436e0bb30fe21155af2d31327fd44d3924'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '019898b9ca1478f2b536d0466760da6ccb1baf2c0d05dfebe449b78ac722eccf'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dba85891616d6c296bb9c7a5606a187bed65a1efedcbd9ee50dd765495b516d5'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cce9894dfc1a136bf45b9ea5ca41b9f84325636187277cb27e6292b03848d634'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fef91d48fa4bfb6e9f7254beef1c4fdc5ddf31d64d0369dbb427425de9454be6'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '15e20ae31c2e9692e0ee64fde249d3ce87129cfac281e9fbc4d74c2454cc43ad'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error getting RW layer size for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1bb2674bac0e8985ce01a06b31476252be5f65ac66d82a2e08b2ea86e4ec5aed'"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 29 01:18:19 functional-596000 cri-dockerd[1168]: time="2024-07-29T01:18:19Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:18:19 functional-596000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jul 29 01:18:19 functional-596000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:18:19 functional-596000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-29T01:18:22Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.071501] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +2.464238] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.103266] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.116452] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.130252] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.974695] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.052634] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.632602] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +4.717931] systemd-fstab-generator[1694]: Ignoring "noauto" option for root device
	[  +0.052232] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.965900] systemd-fstab-generator[2101]: Ignoring "noauto" option for root device
	[  +0.068473] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.556217] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.144175] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.927376] kauditd_printk_skb: 98 callbacks suppressed
	[Jul29 00:58] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[  +0.280018] systemd-fstab-generator[3216]: Ignoring "noauto" option for root device
	[  +0.136220] systemd-fstab-generator[3228]: Ignoring "noauto" option for root device
	[  +0.135284] systemd-fstab-generator[3242]: Ignoring "noauto" option for root device
	[  +5.159757] kauditd_printk_skb: 101 callbacks suppressed
	[Jul29 01:02] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000049] clocksource:                       'hpet' wd_now: b6c345a4 wd_last: b5ef4422 mask: ffffffff
	[  +0.000044] clocksource:                       'tsc' cs_now: 587809d696b cs_last: 586789366bd mask: ffffffffffffffff
	[  +0.000172] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.000295] clocksource: Checking clocksource tsc synchronization from CPU 0.
	
	
	==> kernel <==
	 01:19:20 up 22 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux functional-596000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 01:19:10 functional-596000 kubelet[2108]: E0729 01:19:10.253966    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:19:10 functional-596000 kubelet[2108]: E0729 01:19:10.255645    2108 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-596000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:19:10 functional-596000 kubelet[2108]: E0729 01:19:10.255948    2108 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 01:19:11 functional-596000 kubelet[2108]: E0729 01:19:11.477028    2108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.169.0.4:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-596000.17e689235b4cebd9  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-596000,UID:471ce4342a500a995eaa994abbd56071,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.169.0.4:8441/livez\": dial tcp 192.169.0.4:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-596000,},FirstTimestamp:2024-07-29 00:58:12.464421849 +0000 UTC m=+37.011858411,LastTimestamp:2024-07-29 00:58:12.464421849 +0000 UTC m=+37.011858411,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-596000,}"
	Jul 29 01:19:11 functional-596000 kubelet[2108]: E0729 01:19:11.816216    2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused" interval="7s"
	Jul 29 01:19:12 functional-596000 kubelet[2108]: E0729 01:19:12.595053    2108 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m8.582429695s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 29 01:19:14 functional-596000 kubelet[2108]: I0729 01:19:14.546385    2108 status_manager.go:853] "Failed to get status for pod" podUID="471ce4342a500a995eaa994abbd56071" pod="kube-system/kube-apiserver-functional-596000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-596000\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Jul 29 01:19:17 functional-596000 kubelet[2108]: E0729 01:19:17.602210    2108 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m13.589793336s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 29 01:19:18 functional-596000 kubelet[2108]: E0729 01:19:18.823716    2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-596000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused" interval="7s"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.932566    2108 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.932621    2108 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: I0729 01:19:19.932645    2108 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.932684    2108 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.932714    2108 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.932851    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.932891    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933147    2108 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933181    2108 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933201    2108 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933313    2108 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933358    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933386    2108 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933945    2108 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.933976    2108 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 29 01:19:19 functional-596000 kubelet[2108]: E0729 01:19:19.934163    2108 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	E0728 18:18:19.592964    2609 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.608156    2609 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.623275    2609 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.638068    2609 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.652971    2609 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.667759    2609 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.682881    2609 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0728 18:18:19.696202    2609 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-596000 -n functional-596000: exit status 2 (155.704013ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-596000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (120.45s)

TestMultiControlPlane/serial/RestartCluster (76.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-168000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-168000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m16.096856188s)

-- stdout --
	* [ha-168000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-168000" primary control-plane node in "ha-168000" cluster
	* Restarting existing hyperkit VM for "ha-168000" ...
	
	

-- /stdout --
** stderr ** 
	I0728 18:32:16.466467    4145 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:32:16.466751    4145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:16.466758    4145 out.go:304] Setting ErrFile to fd 2...
	I0728 18:32:16.466762    4145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:16.466941    4145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:32:16.468373    4145 out.go:298] Setting JSON to false
	I0728 18:32:16.490949    4145 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3707,"bootTime":1722213029,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:32:16.491043    4145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:32:16.512662    4145 out.go:177] * [ha-168000] minikube v1.33.1 on Darwin 14.5
	I0728 18:32:16.554568    4145 notify.go:220] Checking for updates...
	I0728 18:32:16.576288    4145 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:32:16.597137    4145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:32:16.618410    4145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:32:16.639556    4145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:32:16.660381    4145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:32:16.681360    4145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:32:16.703190    4145 config.go:182] Loaded profile config "ha-168000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:16.703876    4145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.703961    4145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:32:16.713591    4145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52160
	I0728 18:32:16.713971    4145 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:32:16.714392    4145 main.go:141] libmachine: Using API Version  1
	I0728 18:32:16.714401    4145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:32:16.714616    4145 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:32:16.714825    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:16.715045    4145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:32:16.715282    4145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.715305    4145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:32:16.723571    4145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52162
	I0728 18:32:16.723911    4145 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:32:16.724227    4145 main.go:141] libmachine: Using API Version  1
	I0728 18:32:16.724238    4145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:32:16.724478    4145 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:32:16.724604    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:16.753218    4145 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 18:32:16.795399    4145 start.go:297] selected driver: hyperkit
	I0728 18:32:16.795429    4145 start.go:901] validating driver "hyperkit" against &{Name:ha-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:ha-168000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:32:16.795658    4145 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:32:16.795843    4145 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:32:16.796058    4145 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:32:16.805529    4145 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:32:16.809323    4145 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.809348    4145 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:32:16.811931    4145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:32:16.811970    4145 cni.go:84] Creating CNI manager for ""
	I0728 18:32:16.811977    4145 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:32:16.812048    4145 start.go:340] cluster config:
	{Name:ha-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-168000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:32:16.812145    4145 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:32:16.833273    4145 out.go:177] * Starting "ha-168000" primary control-plane node in "ha-168000" cluster
	I0728 18:32:16.854243    4145 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:32:16.854329    4145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:32:16.854353    4145 cache.go:56] Caching tarball of preloaded images
	I0728 18:32:16.854574    4145 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:32:16.854592    4145 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:32:16.854767    4145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/ha-168000/config.json ...
	I0728 18:32:16.855703    4145 start.go:360] acquireMachinesLock for ha-168000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:16.855844    4145 start.go:364] duration metric: took 117.001µs to acquireMachinesLock for "ha-168000"
	I0728 18:32:16.855872    4145 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:32:16.855885    4145 fix.go:54] fixHost starting: 
	I0728 18:32:16.856215    4145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.856249    4145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:32:16.865090    4145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52164
	I0728 18:32:16.865456    4145 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:32:16.865829    4145 main.go:141] libmachine: Using API Version  1
	I0728 18:32:16.865843    4145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:32:16.866054    4145 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:32:16.866193    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:16.866310    4145 main.go:141] libmachine: (ha-168000) Calling .GetState
	I0728 18:32:16.866402    4145 main.go:141] libmachine: (ha-168000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:32:16.866474    4145 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid from json: 3788
	I0728 18:32:16.867353    4145 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid 3788 missing from process table
	I0728 18:32:16.867398    4145 fix.go:112] recreateIfNeeded on ha-168000: state=Stopped err=<nil>
	I0728 18:32:16.867415    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	W0728 18:32:16.867493    4145 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:32:16.909199    4145 out.go:177] * Restarting existing hyperkit VM for "ha-168000" ...
	I0728 18:32:16.930377    4145 main.go:141] libmachine: (ha-168000) Calling .Start
	I0728 18:32:16.930654    4145 main.go:141] libmachine: (ha-168000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:32:16.930698    4145 main.go:141] libmachine: (ha-168000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/hyperkit.pid
	I0728 18:32:16.932402    4145 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid 3788 missing from process table
	I0728 18:32:16.932416    4145 main.go:141] libmachine: (ha-168000) DBG | pid 3788 is in state "Stopped"
	I0728 18:32:16.932435    4145 main.go:141] libmachine: (ha-168000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/hyperkit.pid...
	I0728 18:32:16.932639    4145 main.go:141] libmachine: (ha-168000) DBG | Using UUID f81d08b6-afc7-461b-b0b8-646cbb74222f
	I0728 18:32:17.056264    4145 main.go:141] libmachine: (ha-168000) DBG | Generated MAC 9a:f7:34:b6:18:f
	I0728 18:32:17.056293    4145 main.go:141] libmachine: (ha-168000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-168000
	I0728 18:32:17.056410    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f81d08b6-afc7-461b-b0b8-646cbb74222f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c26c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:32:17.056456    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f81d08b6-afc7-461b-b0b8-646cbb74222f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c26c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:32:17.056490    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f81d08b6-afc7-461b-b0b8-646cbb74222f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/ha-168000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-168000"}
	I0728 18:32:17.056524    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f81d08b6-afc7-461b-b0b8-646cbb74222f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/ha-168000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-168000"
	I0728 18:32:17.056538    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:32:17.057998    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 DEBUG: hyperkit: Pid is 4158
	I0728 18:32:17.058374    4145 main.go:141] libmachine: (ha-168000) DBG | Attempt 0
	I0728 18:32:17.058391    4145 main.go:141] libmachine: (ha-168000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:32:17.058464    4145 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid from json: 4158
	I0728 18:32:17.060022    4145 main.go:141] libmachine: (ha-168000) DBG | Searching for 9a:f7:34:b6:18:f in /var/db/dhcpd_leases ...
	I0728 18:32:17.060085    4145 main.go:141] libmachine: (ha-168000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0728 18:32:17.060103    4145 main.go:141] libmachine: (ha-168000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:32:17.060117    4145 main.go:141] libmachine: (ha-168000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:32:17.060125    4145 main.go:141] libmachine: (ha-168000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:32:17.060134    4145 main.go:141] libmachine: (ha-168000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a841be}
	I0728 18:32:17.060153    4145 main.go:141] libmachine: (ha-168000) DBG | Found match: 9a:f7:34:b6:18:f
	I0728 18:32:17.060162    4145 main.go:141] libmachine: (ha-168000) DBG | IP: 192.169.0.5
	I0728 18:32:17.060207    4145 main.go:141] libmachine: (ha-168000) Calling .GetConfigRaw
	I0728 18:32:17.060879    4145 main.go:141] libmachine: (ha-168000) Calling .GetIP
	I0728 18:32:17.061075    4145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/ha-168000/config.json ...
	I0728 18:32:17.061537    4145 machine.go:94] provisionDockerMachine start ...
	I0728 18:32:17.061550    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:17.061685    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:17.061795    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:17.061928    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:17.062038    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:17.062129    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:17.062255    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:17.062459    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:17.062468    4145 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:32:17.065903    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:32:17.124935    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:32:17.125665    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:32:17.125682    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:32:17.125691    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:32:17.125699    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:32:17.507154    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:32:17.507169    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:32:17.621950    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:32:17.621970    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:32:17.621995    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:32:17.622011    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:32:17.622918    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:32:17.622930    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:32:23.209025    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:32:23.209040    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:32:23.209047    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:32:23.233762    4145 main.go:141] libmachine: (ha-168000) DBG | 2024/07/28 18:32:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:32:28.127583    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:32:28.127597    4145 main.go:141] libmachine: (ha-168000) Calling .GetMachineName
	I0728 18:32:28.127785    4145 buildroot.go:166] provisioning hostname "ha-168000"
	I0728 18:32:28.127800    4145 main.go:141] libmachine: (ha-168000) Calling .GetMachineName
	I0728 18:32:28.127911    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.128014    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:28.128100    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.128200    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.128285    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:28.128447    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:28.128649    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:28.128657    4145 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168000 && echo "ha-168000" | sudo tee /etc/hostname
	I0728 18:32:28.195974    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168000
	
	I0728 18:32:28.195993    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.196139    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:28.196241    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.196335    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.196425    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:28.196572    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:28.196718    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:28.196730    4145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:32:28.259492    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:32:28.259515    4145 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:32:28.259535    4145 buildroot.go:174] setting up certificates
	I0728 18:32:28.259544    4145 provision.go:84] configureAuth start
	I0728 18:32:28.259551    4145 main.go:141] libmachine: (ha-168000) Calling .GetMachineName
	I0728 18:32:28.259680    4145 main.go:141] libmachine: (ha-168000) Calling .GetIP
	I0728 18:32:28.259806    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.259889    4145 provision.go:143] copyHostCerts
	I0728 18:32:28.259922    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:32:28.259995    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:32:28.260004    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:32:28.260152    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:32:28.260413    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:32:28.260454    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:32:28.260459    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:32:28.260538    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:32:28.260688    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:32:28.260727    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:32:28.260732    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:32:28.260813    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:32:28.260966    4145 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.ha-168000 san=[127.0.0.1 192.169.0.5 ha-168000 localhost minikube]
	I0728 18:32:28.414986    4145 provision.go:177] copyRemoteCerts
	I0728 18:32:28.415044    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:32:28.415059    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.415248    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:28.415343    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.415438    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:28.415521    4145 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/id_rsa Username:docker}
	I0728 18:32:28.451564    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:32:28.451645    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0728 18:32:28.471850    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:32:28.471911    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:32:28.491572    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:32:28.491633    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:32:28.511347    4145 provision.go:87] duration metric: took 251.798785ms to configureAuth
	I0728 18:32:28.511360    4145 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:32:28.511524    4145 config.go:182] Loaded profile config "ha-168000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:28.511537    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:28.511677    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.511763    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:28.511843    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.511920    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.512022    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:28.512138    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:28.512261    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:28.512268    4145 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:32:28.570201    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:32:28.570213    4145 buildroot.go:70] root file system type: tmpfs
	I0728 18:32:28.570299    4145 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:32:28.570313    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.570471    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:28.570564    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.570669    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.570769    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:28.570908    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:28.571060    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:28.571103    4145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:32:28.637163    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:32:28.637184    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:28.637330    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:28.637433    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.637519    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:28.637594    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:28.637724    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:28.637868    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:28.637880    4145 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:32:30.317904    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:32:30.317920    4145 machine.go:97] duration metric: took 13.25663555s to provisionDockerMachine
	I0728 18:32:30.317938    4145 start.go:293] postStartSetup for "ha-168000" (driver="hyperkit")
	I0728 18:32:30.317947    4145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:32:30.317957    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:30.318167    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:32:30.318187    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:30.318283    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:30.318373    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:30.318458    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:30.318543    4145 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/id_rsa Username:docker}
	I0728 18:32:30.359426    4145 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:32:30.362845    4145 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:32:30.362862    4145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:32:30.362967    4145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:32:30.363168    4145 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:32:30.363174    4145 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:32:30.363381    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:32:30.372991    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:32:30.406293    4145 start.go:296] duration metric: took 88.346559ms for postStartSetup
	I0728 18:32:30.406315    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:30.406501    4145 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0728 18:32:30.406515    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:30.406609    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:30.406687    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:30.406761    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:30.406834    4145 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/id_rsa Username:docker}
	I0728 18:32:30.441093    4145 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0728 18:32:30.441160    4145 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0728 18:32:30.474657    4145 fix.go:56] duration metric: took 13.619043526s for fixHost
	I0728 18:32:30.474680    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:30.474829    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:30.474916    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:30.475021    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:30.475109    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:30.475238    4145 main.go:141] libmachine: Using SSH client type: native
	I0728 18:32:30.475387    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xabe30c0] 0xabe5e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0728 18:32:30.475395    4145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:32:30.531225    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722216750.704932578
	
	I0728 18:32:30.531238    4145 fix.go:216] guest clock: 1722216750.704932578
	I0728 18:32:30.531243    4145 fix.go:229] Guest: 2024-07-28 18:32:30.704932578 -0700 PDT Remote: 2024-07-28 18:32:30.47467 -0700 PDT m=+14.043319030 (delta=230.262578ms)
	I0728 18:32:30.531262    4145 fix.go:200] guest clock delta is within tolerance: 230.262578ms
	I0728 18:32:30.531266    4145 start.go:83] releasing machines lock for "ha-168000", held for 13.675683299s
	I0728 18:32:30.531288    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:30.531425    4145 main.go:141] libmachine: (ha-168000) Calling .GetIP
	I0728 18:32:30.531522    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:30.531834    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:30.531944    4145 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:32:30.532026    4145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:32:30.532059    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:30.532062    4145 ssh_runner.go:195] Run: cat /version.json
	I0728 18:32:30.532072    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:32:30.532154    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:30.532173    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:32:30.532241    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:30.532268    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:32:30.532343    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:30.532350    4145 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:32:30.532437    4145 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/id_rsa Username:docker}
	I0728 18:32:30.532450    4145 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/id_rsa Username:docker}
	I0728 18:32:30.563202    4145 ssh_runner.go:195] Run: systemctl --version
	I0728 18:32:30.567923    4145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 18:32:30.614548    4145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:32:30.614643    4145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:32:30.628087    4145 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:32:30.628098    4145 start.go:495] detecting cgroup driver to use...
	I0728 18:32:30.628200    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:32:30.646891    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:32:30.656272    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:32:30.665313    4145 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:32:30.665358    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:32:30.674276    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:32:30.683077    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:32:30.692549    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:32:30.702304    4145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:32:30.711577    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:32:30.720634    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:32:30.729615    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:32:30.738472    4145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:32:30.746761    4145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:32:30.755064    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:32:30.848816    4145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:32:30.867766    4145 start.go:495] detecting cgroup driver to use...
	I0728 18:32:30.867842    4145 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:32:30.880782    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:32:30.896720    4145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:32:30.915968    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:32:30.927838    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:32:30.938744    4145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:32:30.958206    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:32:30.969775    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:32:30.984812    4145 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:32:30.987741    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:32:30.995725    4145 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:32:31.009028    4145 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:32:31.102972    4145 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:32:31.213358    4145 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:32:31.213422    4145 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:32:31.227351    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:32:31.320635    4145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:33:32.343802    4145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.024354929s)
	I0728 18:33:32.343863    4145 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 18:33:32.380675    4145 out.go:177] 
	W0728 18:33:32.401705    4145 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:32:29 ha-168000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:32:29 ha-168000 dockerd[489]: time="2024-07-29T01:32:29.089907398Z" level=info msg="Starting up"
	Jul 29 01:32:29 ha-168000 dockerd[489]: time="2024-07-29T01:32:29.090311518Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:32:29 ha-168000 dockerd[489]: time="2024-07-29T01:32:29.091345616Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.106930568Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121566641Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121587467Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121623141Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121633035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121760185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121801703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121910600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121922658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121931607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.121938703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.122022043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.122186120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.123729719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.123782250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.123914334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.123957695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.124116979Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.124168463Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.126636648Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.126701102Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.126738497Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.126771512Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.126802325Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.126868710Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127023086Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127094816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127182781Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127227769Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127262501Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127292309Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127320795Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127350758Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127381320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127413722Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127443454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127481259Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127523867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127558692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127589338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127622258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127653770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127683498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127713264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127742718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127771668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127801809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127830153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127860342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127913860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127955937Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.127994300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128026171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128054641Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128128501Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128218408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128248152Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128277093Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128325443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128407826Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128569754Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128941020Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.128996418Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.129026406Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:32:29 ha-168000 dockerd[496]: time="2024-07-29T01:32:29.129228590Z" level=info msg="containerd successfully booted in 0.023033s"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.119150060Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.152467143Z" level=info msg="Loading containers: start."
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.353531813Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.415313630Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.459251613Z" level=warning msg="error locating sandbox id 7c25cc9a0059d01bc04dbddc310244ae2664b1e6986304779f5fc9823258901a: sandbox 7c25cc9a0059d01bc04dbddc310244ae2664b1e6986304779f5fc9823258901a not found"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.459634645Z" level=info msg="Loading containers: done."
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.466339328Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.466560343Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.488459184Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:32:30 ha-168000 dockerd[489]: time="2024-07-29T01:32:30.488560763Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:32:30 ha-168000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:32:31 ha-168000 dockerd[489]: time="2024-07-29T01:32:31.506501314Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:32:31 ha-168000 dockerd[489]: time="2024-07-29T01:32:31.507480745Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:32:31 ha-168000 dockerd[489]: time="2024-07-29T01:32:31.507564079Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:32:31 ha-168000 dockerd[489]: time="2024-07-29T01:32:31.507598097Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:32:31 ha-168000 dockerd[489]: time="2024-07-29T01:32:31.507631415Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:32:31 ha-168000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:32:32 ha-168000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:32:32 ha-168000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:32:32 ha-168000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:32:32 ha-168000 dockerd[1167]: time="2024-07-29T01:32:32.543678727Z" level=info msg="Starting up"
	Jul 29 01:33:32 ha-168000 dockerd[1167]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:33:32 ha-168000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:33:32 ha-168000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:33:32 ha-168000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Jul 29 01:32:32 ha-168000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:32:32 ha-168000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:32:32 ha-168000 dockerd[1167]: time="2024-07-29T01:32:32.543678727Z" level=info msg="Starting up"
	Jul 29 01:33:32 ha-168000 dockerd[1167]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:33:32 ha-168000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:33:32 ha-168000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:33:32 ha-168000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 18:33:32.401834    4145 out.go:239] * 
	* 
	W0728 18:33:32.403293    4145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:33:32.464787    4145 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-168000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000: exit status 6 (156.400822ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 18:33:32.665366    4166 status.go:417] kubeconfig endpoint: get endpoint: "ha-168000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-168000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (76.26s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-168000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-168000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-168000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-168000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugi
n\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fa
lse,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000: exit status 6 (150.22075ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 18:33:32.987399    4177 status.go:417] kubeconfig endpoint: get endpoint: "ha-168000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-168000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.32s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-168000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-168000 --control-plane -v=7 --alsologtostderr: exit status 83 (151.740414ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-168000-m02 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-168000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:33:33.052475    4182 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:33:33.052748    4182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:33:33.052753    4182 out.go:304] Setting ErrFile to fd 2...
	I0728 18:33:33.052756    4182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:33:33.052935    4182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:33:33.053312    4182 mustload.go:65] Loading cluster: ha-168000
	I0728 18:33:33.053645    4182 config.go:182] Loaded profile config "ha-168000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:33:33.053991    4182 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:33:33.054032    4182 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:33:33.062266    4182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52210
	I0728 18:33:33.062649    4182 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:33:33.063064    4182 main.go:141] libmachine: Using API Version  1
	I0728 18:33:33.063092    4182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:33:33.063350    4182 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:33:33.063478    4182 main.go:141] libmachine: (ha-168000) Calling .GetState
	I0728 18:33:33.063580    4182 main.go:141] libmachine: (ha-168000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:33:33.063647    4182 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid from json: 4158
	I0728 18:33:33.064588    4182 host.go:66] Checking if "ha-168000" exists ...
	I0728 18:33:33.064831    4182 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:33:33.064854    4182 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:33:33.073026    4182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52212
	I0728 18:33:33.073418    4182 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:33:33.073738    4182 main.go:141] libmachine: Using API Version  1
	I0728 18:33:33.073748    4182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:33:33.073995    4182 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:33:33.074118    4182 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:33:33.074460    4182 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:33:33.074481    4182 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:33:33.082544    4182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52214
	I0728 18:33:33.082859    4182 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:33:33.083177    4182 main.go:141] libmachine: Using API Version  1
	I0728 18:33:33.083186    4182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:33:33.083389    4182 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:33:33.083592    4182 main.go:141] libmachine: (ha-168000-m02) Calling .GetState
	I0728 18:33:33.083693    4182 main.go:141] libmachine: (ha-168000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:33:33.083773    4182 main.go:141] libmachine: (ha-168000-m02) DBG | hyperkit pid from json: 3798
	I0728 18:33:33.084672    4182 main.go:141] libmachine: (ha-168000-m02) DBG | hyperkit pid 3798 missing from process table
	I0728 18:33:33.105875    4182 out.go:177] * The control-plane node ha-168000-m02 host is not running: state=Stopped
	I0728 18:33:33.127191    4182 out.go:177]   To start a cluster, run: "minikube start -p ha-168000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-168000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000: exit status 6 (146.088131ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 18:33:33.285939    4187 status.go:417] kubeconfig endpoint: get endpoint: "ha-168000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-168000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.30s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-168000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-168000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-168000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-168000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\
":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMet
rics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-168000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-168000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-168000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-168000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\"
:false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false
,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-168000 -n ha-168000: exit status 6 (147.96811ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 18:33:33.604102    4198 status.go:417] kubeconfig endpoint: get endpoint: "ha-168000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-168000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (137.19s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-925000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-925000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m17.114967886s)

                                                
                                                
-- stdout --
	* [mount-start-1-925000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-925000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-925000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:e1:fc:92:52:eb
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-925000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:b3:ae:e8:7a:de
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:b3:ae:e8:7a:de
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-925000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-925000 -n mount-start-1-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-925000 -n mount-start-1-925000: exit status 7 (78.163488ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0728 18:39:17.163217    4440 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 18:39:17.163243    4440 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-925000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (137.19s)
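The "IP address never found in dhcp leases file" failure above comes from the hyperkit driver polling /var/db/dhcpd_leases for the VM's generated MAC address and never finding a matching entry (the same loop is visible in the AddNode logs below as "Searching for ... in /var/db/dhcpd_leases"). A minimal sketch of that lookup, parsing the entry format exactly as the driver's DBG lines print it (`find_ip` is a hypothetical helper for illustration, not minikube code, and the sample entries are copied from the log):

```python
import re

# Entry format as printed by the driver's DBG lines, e.g.:
# {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
ENTRY_RE = re.compile(r"\{Name:(\S+) IPAddress:(\S+) HWAddress:(\S+) ID:\S+ Lease:\S+\}")

def find_ip(entries, mac):
    """Return the leased IP for `mac`, or None if no lease matches --
    the condition that makes the driver retry and eventually fail."""
    for entry in entries:
        m = ENTRY_RE.search(entry)
        if m and m.group(3) == mac:
            return m.group(2)
    return None

leases = [
    "{Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}",
    "{Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}",
]

print(find_ip(leases, "e:8c:86:9:55:cf"))    # existing lease -> its IP
print(find_ip(leases, "3e:8b:c4:58:a6:30"))  # new VM's MAC, no lease yet -> None
```

When the new VM never registers a lease (as in this run), every polling attempt returns None until the driver's timeout expires, producing the GUEST_PROVISION exit status 80 seen above.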

TestMultiNode/serial/AddNode (79.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-362000 -v 3 --alsologtostderr
E0728 18:42:24.062692    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-362000 -v 3 --alsologtostderr: exit status 90 (1m16.443422054s)

-- stdout --
	* Adding node m03 to cluster multinode-362000 as [worker]
	* Starting "multinode-362000-m03" worker node in "multinode-362000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0728 18:41:21.398130    4546 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:41:21.398329    4546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:41:21.398334    4546 out.go:304] Setting ErrFile to fd 2...
	I0728 18:41:21.398338    4546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:41:21.398517    4546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:41:21.398847    4546 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:41:21.399134    4546 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:41:21.399509    4546 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:41:21.399550    4546 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:41:21.407810    4546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52625
	I0728 18:41:21.408218    4546 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:41:21.408644    4546 main.go:141] libmachine: Using API Version  1
	I0728 18:41:21.408653    4546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:41:21.408931    4546 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:41:21.409057    4546 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:41:21.409149    4546 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:21.409217    4546 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:41:21.410166    4546 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:41:21.410414    4546 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:41:21.410436    4546 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:41:21.418640    4546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52627
	I0728 18:41:21.418978    4546 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:41:21.419294    4546 main.go:141] libmachine: Using API Version  1
	I0728 18:41:21.419305    4546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:41:21.419547    4546 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:41:21.419655    4546 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:41:21.419756    4546 api_server.go:166] Checking apiserver status ...
	I0728 18:41:21.419821    4546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:41:21.419840    4546 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:41:21.419928    4546 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:41:21.420008    4546 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:41:21.420080    4546 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:41:21.420165    4546 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:41:21.456513    4546 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup
	W0728 18:41:21.463672    4546 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:41:21.463716    4546 ssh_runner.go:195] Run: ls
	I0728 18:41:21.466999    4546 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:41:21.470903    4546 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:41:21.492211    4546 out.go:177] * Adding node m03 to cluster multinode-362000 as [worker]
	I0728 18:41:21.513657    4546 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:41:21.513823    4546 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:41:21.535092    4546 out.go:177] * Starting "multinode-362000-m03" worker node in "multinode-362000" cluster
	I0728 18:41:21.555920    4546 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:41:21.555958    4546 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:41:21.555977    4546 cache.go:56] Caching tarball of preloaded images
	I0728 18:41:21.556130    4546 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:41:21.556143    4546 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:41:21.556231    4546 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:41:21.556806    4546 start.go:360] acquireMachinesLock for multinode-362000-m03: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:41:21.556891    4546 start.go:364] duration metric: took 61.431µs to acquireMachinesLock for "multinode-362000-m03"
	I0728 18:41:21.556917    4546 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0728 18:41:21.557032    4546 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0728 18:41:21.578109    4546 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:41:21.578320    4546 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:41:21.578355    4546 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:41:21.587649    4546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52631
	I0728 18:41:21.588012    4546 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:41:21.588351    4546 main.go:141] libmachine: Using API Version  1
	I0728 18:41:21.588363    4546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:41:21.588573    4546 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:41:21.588674    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:41:21.588763    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:21.588858    4546 start.go:159] libmachine.API.Create for "multinode-362000" (driver="hyperkit")
	I0728 18:41:21.588875    4546 client.go:168] LocalClient.Create starting
	I0728 18:41:21.588911    4546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 18:41:21.588957    4546 main.go:141] libmachine: Decoding PEM data...
	I0728 18:41:21.588968    4546 main.go:141] libmachine: Parsing certificate...
	I0728 18:41:21.589027    4546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 18:41:21.589056    4546 main.go:141] libmachine: Decoding PEM data...
	I0728 18:41:21.589066    4546 main.go:141] libmachine: Parsing certificate...
	I0728 18:41:21.589084    4546 main.go:141] libmachine: Running pre-create checks...
	I0728 18:41:21.589089    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .PreCreateCheck
	I0728 18:41:21.589177    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:21.589223    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetConfigRaw
	I0728 18:41:21.589697    4546 main.go:141] libmachine: Creating machine...
	I0728 18:41:21.589706    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .Create
	I0728 18:41:21.589773    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:21.589894    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | I0728 18:41:21.589770    4550 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:41:21.589963    4546 main.go:141] libmachine: (multinode-362000-m03) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 18:41:21.798156    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | I0728 18:41:21.798056    4550 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa...
	I0728 18:41:22.049583    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | I0728 18:41:22.049499    4550 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk...
	I0728 18:41:22.049596    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Writing magic tar header
	I0728 18:41:22.049610    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Writing SSH key tar header
	I0728 18:41:22.050441    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | I0728 18:41:22.050347    4550 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03 ...
	I0728 18:41:22.519409    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:22.519433    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid
	I0728 18:41:22.519520    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Using UUID 5cda4f36-38f7-4c06-808b-dbe144e26e44
	I0728 18:41:22.545885    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Generated MAC 3e:8b:c4:58:a6:30
	I0728 18:41:22.545917    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:41:22.545972    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5cda4f36-38f7-4c06-808b-dbe144e26e44", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:41:22.546007    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5cda4f36-38f7-4c06-808b-dbe144e26e44", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:41:22.546049    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5cda4f36-38f7-4c06-808b-dbe144e26e44", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage,/Users/j
enkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:41:22.546097    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5cda4f36-38f7-4c06-808b-dbe144e26e44 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/mult
inode-362000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:41:22.546112    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:41:22.548996    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 DEBUG: hyperkit: Pid is 4551
	I0728 18:41:22.549620    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 0
	I0728 18:41:22.549637    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:22.549734    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:22.550673    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:41:22.550760    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:41:22.550777    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:41:22.550806    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:41:22.550825    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:41:22.550841    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:41:22.550858    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:41:22.550869    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:41:22.550885    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:41:22.550898    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:41:22.550913    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:41:22.550929    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:41:22.550944    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:41:22.550956    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:41:22.550971    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:41:22.556499    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:41:22.564581    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:41:22.565410    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:41:22.565455    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:41:22.565478    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:41:22.565502    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:41:22.954258    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:41:22.954273    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:41:23.069168    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:41:23.069189    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:41:23.069204    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:41:23.069216    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:41:23.070033    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:41:23.070042    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:41:24.635835    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 1
	I0728 18:41:24.635851    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:24.635913    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:24.636692    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:41:24.636741    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:41:24.636757    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:41:24.636778    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:41:24.636787    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:41:24.636795    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:41:24.636804    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:41:24.636819    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:41:24.636833    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:41:24.636842    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:41:24.636850    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:41:24.636858    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:41:24.636866    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:41:24.636875    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:41:24.636883    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:41:26.637828    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 2
	I0728 18:41:26.637846    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:26.637893    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:26.638771    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:41:26.638790    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:41:26.638806    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:41:26.638816    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:41:26.638823    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:41:26.638833    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:41:26.638845    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:41:26.638883    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:41:26.638894    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:41:26.638900    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:41:26.638907    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:41:26.638919    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:41:26.638930    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:41:26.638939    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:41:26.638955    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:41:28.639790    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 3
	I0728 18:41:28.639807    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:28.639875    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:28.640669    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:41:28.640693    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:41:28.640701    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:41:28.640709    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:41:28.640716    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:41:28.640722    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:41:28.640728    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:41:28.640735    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:41:28.640751    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:41:28.640758    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:41:28.640763    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:41:28.640780    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:41:28.640804    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:41:28.640811    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:41:28.640820    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:41:28.778868    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:41:28.778937    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:41:28.778956    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:41:28.801779    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:41:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:41:30.641146    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 4
	I0728 18:41:30.641169    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:30.641258    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:30.642046    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:41:30.642124    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:41:30.642134    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:41:30.642141    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:41:30.642149    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:41:30.642156    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:41:30.642165    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:41:30.642183    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:41:30.642192    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:41:30.642200    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:41:30.642208    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:41:30.642215    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:41:30.642221    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:41:30.642228    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:41:30.642240    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:41:32.643344    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 5
	I0728 18:41:32.643358    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:32.643448    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:32.644256    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:41:32.644329    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:41:32.644353    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a844cb}
	I0728 18:41:32.644368    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | Found match: 3e:8b:c4:58:a6:30
	I0728 18:41:32.644381    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | IP: 192.169.0.15
	I0728 18:41:32.644427    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetConfigRaw
	I0728 18:41:32.645032    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:32.645142    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:32.645234    4546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0728 18:41:32.645243    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetState
	I0728 18:41:32.645317    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:41:32.645375    4546 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:41:32.646119    4546 main.go:141] libmachine: Detecting operating system of created instance...
	I0728 18:41:32.646130    4546 main.go:141] libmachine: Waiting for SSH to be available...
	I0728 18:41:32.646137    4546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0728 18:41:32.646142    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:32.646231    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:32.646325    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:32.646403    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:32.646495    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:32.646610    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:32.646825    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:32.646832    4546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0728 18:41:33.706956    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:41:33.706969    4546 main.go:141] libmachine: Detecting the provisioner...
	I0728 18:41:33.706975    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:33.707100    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:33.707200    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.707302    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.707398    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:33.707525    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:33.707664    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:33.707672    4546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0728 18:41:33.769676    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0728 18:41:33.769736    4546 main.go:141] libmachine: found compatible host: buildroot
	I0728 18:41:33.769743    4546 main.go:141] libmachine: Provisioning with buildroot...
	I0728 18:41:33.769748    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:41:33.769886    4546 buildroot.go:166] provisioning hostname "multinode-362000-m03"
	I0728 18:41:33.769898    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:41:33.769997    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:33.770084    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:33.770178    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.770278    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.770358    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:33.770492    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:33.770635    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:33.770644    4546 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m03 && echo "multinode-362000-m03" | sudo tee /etc/hostname
	I0728 18:41:33.849952    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m03
	
	I0728 18:41:33.849974    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:33.850105    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:33.850202    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.850297    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.850381    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:33.850516    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:33.850655    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:33.850667    4546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:41:33.918248    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:41:33.918269    4546 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:41:33.918287    4546 buildroot.go:174] setting up certificates
	I0728 18:41:33.918300    4546 provision.go:84] configureAuth start
	I0728 18:41:33.918309    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:41:33.918448    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:41:33.918566    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:33.918657    4546 provision.go:143] copyHostCerts
	I0728 18:41:33.918737    4546 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:41:33.918746    4546 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:41:33.918894    4546 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:41:33.919130    4546 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:41:33.919136    4546 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:41:33.919221    4546 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:41:33.919411    4546 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:41:33.919424    4546 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:41:33.919533    4546 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:41:33.919696    4546 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m03 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-362000-m03]
	I0728 18:41:33.988385    4546 provision.go:177] copyRemoteCerts
	I0728 18:41:33.988439    4546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:41:33.988459    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:33.988600    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:33.988694    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:33.988785    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:33.988864    4546 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:41:34.026268    4546 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 18:41:34.045997    4546 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:41:34.065655    4546 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:41:34.085165    4546 provision.go:87] duration metric: took 166.849966ms to configureAuth
	I0728 18:41:34.085181    4546 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:41:34.085350    4546 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:41:34.085364    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:34.085495    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:34.085583    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:34.085675    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:34.085780    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:34.085861    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:34.085967    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:34.086097    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:34.086105    4546 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:41:34.149271    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:41:34.149284    4546 buildroot.go:70] root file system type: tmpfs
	I0728 18:41:34.149371    4546 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:41:34.149384    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:34.149528    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:34.149615    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:34.149718    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:34.149819    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:34.149972    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:34.150119    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:34.150169    4546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:41:34.221343    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:41:34.221366    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:34.221489    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:34.221574    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:34.221668    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:34.221770    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:34.221885    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:34.222051    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:34.222063    4546 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:41:35.773854    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:41:35.773868    4546 main.go:141] libmachine: Checking connection to Docker...
	I0728 18:41:35.773875    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetURL
	I0728 18:41:35.774011    4546 main.go:141] libmachine: Docker is up and running!
	I0728 18:41:35.774020    4546 main.go:141] libmachine: Reticulating splines...
	I0728 18:41:35.774025    4546 client.go:171] duration metric: took 14.101492775s to LocalClient.Create
	I0728 18:41:35.774037    4546 start.go:167] duration metric: took 14.10153238s to libmachine.API.Create "multinode-362000"
	I0728 18:41:35.774047    4546 start.go:293] postStartSetup for "multinode-362000-m03" (driver="hyperkit")
	I0728 18:41:35.774055    4546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:41:35.774065    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:35.774209    4546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:41:35.774225    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:35.774322    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:35.774405    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:35.774488    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:35.774572    4546 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:41:35.815556    4546 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:41:35.819092    4546 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:41:35.819109    4546 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:41:35.819220    4546 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:41:35.819373    4546 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:41:35.819539    4546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:41:35.829465    4546 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:41:35.857718    4546 start.go:296] duration metric: took 83.661383ms for postStartSetup
	I0728 18:41:35.857749    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetConfigRaw
	I0728 18:41:35.858358    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:41:35.858516    4546 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:41:35.858852    4546 start.go:128] duration metric: took 14.218157266s to createHost
	I0728 18:41:35.858866    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:35.858959    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:35.859039    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:35.859118    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:35.859192    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:35.859295    4546 main.go:141] libmachine: Using SSH client type: native
	I0728 18:41:35.859427    4546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6360c0] 0xb638e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:41:35.859434    4546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:41:35.921876    4546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217296.043075879
	
	I0728 18:41:35.921889    4546 fix.go:216] guest clock: 1722217296.043075879
	I0728 18:41:35.921902    4546 fix.go:229] Guest: 2024-07-28 18:41:36.043075879 -0700 PDT Remote: 2024-07-28 18:41:35.85886 -0700 PDT m=+14.412270388 (delta=184.215879ms)
	I0728 18:41:35.921920    4546 fix.go:200] guest clock delta is within tolerance: 184.215879ms
	I0728 18:41:35.921924    4546 start.go:83] releasing machines lock for "multinode-362000-m03", held for 14.281375895s
	I0728 18:41:35.921942    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:35.922064    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:41:35.922159    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:35.922445    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:35.922552    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:41:35.922634    4546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:41:35.922666    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:35.922719    4546 ssh_runner.go:195] Run: systemctl --version
	I0728 18:41:35.922736    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:41:35.922749    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:35.922833    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:41:35.922847    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:35.922947    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:41:35.922963    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:35.923045    4546 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:41:35.923066    4546 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:41:35.923135    4546 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:41:35.956434    4546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 18:41:36.006812    4546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:41:36.006878    4546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:41:36.020577    4546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:41:36.020592    4546 start.go:495] detecting cgroup driver to use...
	I0728 18:41:36.020694    4546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:41:36.035545    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:41:36.044392    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:41:36.053308    4546 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:41:36.053372    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:41:36.062494    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:41:36.071494    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:41:36.080524    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:41:36.089358    4546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:41:36.098694    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:41:36.107874    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:41:36.116779    4546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
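The sequence of `sed` edits above rewrites `/etc/containerd/config.toml` so containerd matches the "cgroupfs" cgroup driver. The key rewrite (flipping `SystemdCgroup`) can be demonstrated safely against a scratch copy of a minimal config (the file contents here are illustrative, not the full config.toml):

```shell
# Demonstrate the SystemdCgroup rewrite from the log on a temp file,
# preserving leading indentation via the captured group, as the sed above does.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup = false' "$cfg"
rm -f "$cfg"
```

Note `sed -i -r` is GNU sed syntax (as used inside the Buildroot guest); BSD/macOS sed would need `-i '' -E` instead.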
	I0728 18:41:36.125656    4546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:41:36.133628    4546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:41:36.141682    4546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:41:36.249152    4546 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:41:36.268868    4546 start.go:495] detecting cgroup driver to use...
	I0728 18:41:36.268947    4546 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:41:36.290243    4546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:41:36.306061    4546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:41:36.322947    4546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:41:36.334159    4546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:41:36.345001    4546 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:41:36.364134    4546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:41:36.374868    4546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:41:36.390283    4546 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:41:36.393336    4546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:41:36.400729    4546 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:41:36.414408    4546 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:41:36.510004    4546 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:41:36.626988    4546 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:41:36.627062    4546 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:41:36.641900    4546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:41:36.744758    4546 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:42:37.767024    4546 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.0226663s)
	I0728 18:42:37.767087    4546 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 18:42:37.802737    4546 out.go:177] 
	W0728 18:42:37.824493    4546 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:41:34 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:41:34 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:34.658124141Z" level=info msg="Starting up"
	Jul 29 01:41:34 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:34.658788905Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:41:34 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:34.659282372Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=520
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.676494679Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691301890Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691327454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691366208Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691376754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691433756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691489901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691624776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691658740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691670762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691677667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691735357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691956822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693550051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693585134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693674129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693706552Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693768267Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693828639Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696030860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696081785Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696095509Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696110628Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696121812Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696182578Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696344460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696442029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696476862Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696488061Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696499404Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696509104Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696516685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696526076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696535096Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696543301Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696551506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696558772Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696571527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696580666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696590623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696599945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696608935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696617022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696624460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696632146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696640157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696648964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696656202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696664391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696672576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696682624Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696695670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696704133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696712320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696740605Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696751144Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696758145Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696766182Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696773233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696780822Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696826813Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696952494Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.697005238Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.697030387Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.697041952Z" level=info msg="containerd successfully booted in 0.021254s"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.685575597Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.690035092Z" level=info msg="Loading containers: start."
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.771963868Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.853658368Z" level=info msg="Loading containers: done."
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.865278840Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.866497311Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.891919270Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:41:35 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.892089237Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.878725673Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:41:36 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.879978427Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.880258671Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.880391061Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.880425238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:41:37 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:41:37 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:41:37 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:41:37 multinode-362000-m03 dockerd[921]: time="2024-07-29T01:41:37.916689514Z" level=info msg="Starting up"
	Jul 29 01:42:37 multinode-362000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:42:37 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:42:37 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:42:37 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
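The root cause in the journal above is dockerd timing out while dialing `/run/containerd/containerd.sock` after the restart. A hedged diagnostic sketch one might run on the guest to confirm this state (paths taken from the log; `systemctl` output will vary by host):

```shell
# Check whether the system containerd socket dockerd dials actually exists;
# if it is missing or containerd is down, dockerd fails exactly as logged.
SOCK=/run/containerd/containerd.sock
if [ -S "$SOCK" ]; then
  echo "containerd socket present: $SOCK"
else
  echo "containerd socket missing: $SOCK (dockerd will time out dialing it)"
fi
systemctl is-active containerd 2>/dev/null || true
```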
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:41:34 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:41:34 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:34.658124141Z" level=info msg="Starting up"
	Jul 29 01:41:34 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:34.658788905Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:41:34 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:34.659282372Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=520
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.676494679Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691301890Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691327454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691366208Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691376754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691433756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691489901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691624776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691658740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691670762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691677667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691735357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.691956822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693550051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693585134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693674129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693706552Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693768267Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.693828639Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696030860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696081785Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696095509Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696110628Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696121812Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696182578Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696344460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696442029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696476862Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696488061Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696499404Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696509104Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696516685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696526076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696535096Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696543301Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696551506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696558772Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696571527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696580666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696590623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696599945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696608935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696617022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696624460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696632146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696640157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696648964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696656202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696664391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696672576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696682624Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696695670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696704133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696712320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696740605Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696751144Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696758145Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696766182Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696773233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696780822Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696826813Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.696952494Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.697005238Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.697030387Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:41:34 multinode-362000-m03 dockerd[520]: time="2024-07-29T01:41:34.697041952Z" level=info msg="containerd successfully booted in 0.021254s"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.685575597Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.690035092Z" level=info msg="Loading containers: start."
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.771963868Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.853658368Z" level=info msg="Loading containers: done."
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.865278840Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.866497311Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.891919270Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:41:35 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:41:35 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:35.892089237Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.878725673Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:41:36 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.879978427Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.880258671Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.880391061Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:41:36 multinode-362000-m03 dockerd[514]: time="2024-07-29T01:41:36.880425238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:41:37 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:41:37 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:41:37 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:41:37 multinode-362000-m03 dockerd[921]: time="2024-07-29T01:41:37.916689514Z" level=info msg="Starting up"
	Jul 29 01:42:37 multinode-362000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:42:37 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:42:37 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:42:37 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 18:42:37.824614    4546 out.go:239] * 
	W0728 18:42:37.828569    4546 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:42:37.849295    4546 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-362000 -v 3 --alsologtostderr" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 logs -n 25: (2.264125726s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p json-output-error-327000                       | json-output-error-327000 | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT | 28 Jul 24 18:35 PDT |
	| start   | -p first-332000                                   | first-332000             | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT | 28 Jul 24 18:36 PDT |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| start   | -p second-335000                                  | second-335000            | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| delete  | -p second-335000                                  | second-335000            | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| delete  | -p first-332000                                   | first-332000             | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| start   | -p mount-start-1-925000                           | mount-start-1-925000     | jenkins | v1.33.1 | 28 Jul 24 18:37 PDT |                     |
	|         | --memory=2048 --mount                             |                          |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                          |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                          |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                          |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| delete  | -p mount-start-2-934000                           | mount-start-2-934000     | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:39 PDT |
	| delete  | -p mount-start-1-925000                           | mount-start-1-925000     | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:39 PDT |
	| start   | -p multinode-362000                               | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:41 PDT |
	|         | --wait=true --memory=2200                         |                          |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                          |         |         |                     |                     |
	|         | --alsologtostderr                                 |                          |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- apply -f                   | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- rollout                    | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | status deployment/busybox                         |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g -- nslookup               |                          |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx -- nslookup               |                          |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g                           |                          |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                          |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                          |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g -- sh                     |                          |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx                           |                          |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                          |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                          |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx -- sh                     |                          |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                          |         |         |                     |                     |
	| node    | add -p multinode-362000 -v 3                      | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT |                     |
	|         | --alsologtostderr                                 |                          |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 18:39:22
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 18:39:22.678257    4457 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:39:22.678427    4457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:39:22.678433    4457 out.go:304] Setting ErrFile to fd 2...
	I0728 18:39:22.678437    4457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:39:22.678623    4457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:39:22.680060    4457 out.go:298] Setting JSON to false
	I0728 18:39:22.702282    4457 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4133,"bootTime":1722213029,"procs":426,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:39:22.702372    4457 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:39:22.725628    4457 out.go:177] * [multinode-362000] minikube v1.33.1 on Darwin 14.5
	I0728 18:39:22.766545    4457 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:39:22.766600    4457 notify.go:220] Checking for updates...
	I0728 18:39:22.809590    4457 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:39:22.830413    4457 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:39:22.851674    4457 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:39:22.872676    4457 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:39:22.893395    4457 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:39:22.914569    4457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:39:22.943475    4457 out.go:177] * Using the hyperkit driver based on user configuration
	I0728 18:39:22.985625    4457 start.go:297] selected driver: hyperkit
	I0728 18:39:22.985654    4457 start.go:901] validating driver "hyperkit" against <nil>
	I0728 18:39:22.985674    4457 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:39:22.990010    4457 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:39:22.990130    4457 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:39:22.998308    4457 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:39:23.002111    4457 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:39:23.002130    4457 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:39:23.002159    4457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:39:23.002374    4457 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:39:23.002401    4457 cni.go:84] Creating CNI manager for ""
	I0728 18:39:23.002410    4457 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0728 18:39:23.002415    4457 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0728 18:39:23.002489    4457 start.go:340] cluster config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:39:23.002577    4457 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:39:23.044533    4457 out.go:177] * Starting "multinode-362000" primary control-plane node in "multinode-362000" cluster
	I0728 18:39:23.065376    4457 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:39:23.065471    4457 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:39:23.065514    4457 cache.go:56] Caching tarball of preloaded images
	I0728 18:39:23.065727    4457 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:39:23.065745    4457 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:39:23.066249    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:39:23.066294    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json: {Name:mk76e134289e3e0202375db08bfa8f62ca33bf04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:23.066977    4457 start.go:360] acquireMachinesLock for multinode-362000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:39:23.067102    4457 start.go:364] duration metric: took 102.049µs to acquireMachinesLock for "multinode-362000"
	I0728 18:39:23.067147    4457 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:39:23.067240    4457 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 18:39:23.109453    4457 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:39:23.109706    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:39:23.109768    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:39:23.119748    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0728 18:39:23.120088    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:39:23.120492    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:39:23.120503    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:39:23.120705    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:39:23.120831    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:23.120933    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:23.121051    4457 start.go:159] libmachine.API.Create for "multinode-362000" (driver="hyperkit")
	I0728 18:39:23.121074    4457 client.go:168] LocalClient.Create starting
	I0728 18:39:23.121106    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 18:39:23.121154    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:39:23.121168    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:39:23.121227    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 18:39:23.121267    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:39:23.121279    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:39:23.121292    4457 main.go:141] libmachine: Running pre-create checks...
	I0728 18:39:23.121299    4457 main.go:141] libmachine: (multinode-362000) Calling .PreCreateCheck
	I0728 18:39:23.121386    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:23.121581    4457 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:39:23.122075    4457 main.go:141] libmachine: Creating machine...
	I0728 18:39:23.122083    4457 main.go:141] libmachine: (multinode-362000) Calling .Create
	I0728 18:39:23.122160    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:23.122282    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.122156    4465 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:39:23.122334    4457 main.go:141] libmachine: (multinode-362000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 18:39:23.302999    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.302937    4465 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa...
	I0728 18:39:23.527292    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.527206    4465 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk...
	I0728 18:39:23.527305    4457 main.go:141] libmachine: (multinode-362000) DBG | Writing magic tar header
	I0728 18:39:23.527315    4457 main.go:141] libmachine: (multinode-362000) DBG | Writing SSH key tar header
	I0728 18:39:23.528110    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.527999    4465 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000 ...
	I0728 18:39:23.900948    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:23.900977    4457 main.go:141] libmachine: (multinode-362000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid
	I0728 18:39:23.901065    4457 main.go:141] libmachine: (multinode-362000) DBG | Using UUID 8122a2e4-0139-4f45-b808-288a2b40595b
	I0728 18:39:24.010965    4457 main.go:141] libmachine: (multinode-362000) DBG | Generated MAC e:8c:86:9:55:cf
	I0728 18:39:24.010982    4457 main.go:141] libmachine: (multinode-362000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:39:24.011023    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:39:24.011055    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:39:24.011094    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8122a2e4-0139-4f45-b808-288a2b40595b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:39:24.011203    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8122a2e4-0139-4f45-b808-288a2b40595b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:39:24.011235    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:39:24.014088    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Pid is 4468
	I0728 18:39:24.014484    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 0
	I0728 18:39:24.014494    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:24.014570    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:24.015422    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:24.015502    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:24.015525    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:24.015549    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:24.015586    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:24.015599    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:24.015607    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:24.015614    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:24.015628    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:24.015637    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:24.015650    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:24.015660    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:24.015693    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:24.021389    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:39:24.069500    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:39:24.070254    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:39:24.070278    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:39:24.070299    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:39:24.070313    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:39:24.456576    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:39:24.456592    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:39:24.571109    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:39:24.571130    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:39:24.571151    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:39:24.571163    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:39:24.572087    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:39:24.572106    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:39:26.015894    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 1
	I0728 18:39:26.015907    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:26.015917    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:26.016731    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:26.016759    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:26.016773    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:26.016783    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:26.016791    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:26.016808    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:26.016824    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:26.016832    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:26.016839    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:26.016846    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:26.016859    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:26.016866    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:26.016874    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:28.018702    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 2
	I0728 18:39:28.018721    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:28.018814    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:28.019707    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:28.019765    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:28.019776    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:28.019785    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:28.019795    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:28.019811    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:28.019837    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:28.019848    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:28.019854    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:28.019861    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:28.019879    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:28.019907    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:28.019921    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:30.020766    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 3
	I0728 18:39:30.020783    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:30.020873    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:30.021728    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:30.021757    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:30.021767    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:30.021792    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:30.021805    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:30.021812    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:30.021821    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:30.021828    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:30.021835    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:30.021842    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:30.021846    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:30.021858    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:30.021870    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:30.184416    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:39:30.184441    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:39:30.184449    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:39:30.208412    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:39:32.022767    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 4
	I0728 18:39:32.022787    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:32.022839    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:32.023669    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:32.023713    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:32.023724    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:32.023755    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:32.023766    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:32.023784    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:32.023792    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:32.023799    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:32.023807    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:32.023813    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:32.023819    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:32.023826    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:32.023833    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:34.026024    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 5
	I0728 18:39:34.026055    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:34.026205    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:34.027793    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:34.027829    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:39:34.027848    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:39:34.027863    4457 main.go:141] libmachine: (multinode-362000) DBG | Found match: e:8c:86:9:55:cf
	I0728 18:39:34.027872    4457 main.go:141] libmachine: (multinode-362000) DBG | IP: 192.169.0.13
	I0728 18:39:34.027956    4457 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:39:34.028709    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:34.028859    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:34.028998    4457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0728 18:39:34.029008    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:39:34.029129    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:34.029211    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:34.030175    4457 main.go:141] libmachine: Detecting operating system of created instance...
	I0728 18:39:34.030188    4457 main.go:141] libmachine: Waiting for SSH to be available...
	I0728 18:39:34.030194    4457 main.go:141] libmachine: Getting to WaitForSSH function...
	I0728 18:39:34.030199    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.030290    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.030399    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.030492    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.030582    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.030718    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.030906    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.030922    4457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0728 18:39:34.086759    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:39:34.086771    4457 main.go:141] libmachine: Detecting the provisioner...
	I0728 18:39:34.086784    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.086905    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.087015    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.087110    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.087189    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.087310    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.087445    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.087453    4457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0728 18:39:34.135876    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0728 18:39:34.135929    4457 main.go:141] libmachine: found compatible host: buildroot
	I0728 18:39:34.135936    4457 main.go:141] libmachine: Provisioning with buildroot...
	I0728 18:39:34.135942    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:34.136085    4457 buildroot.go:166] provisioning hostname "multinode-362000"
	I0728 18:39:34.136096    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:34.136235    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.136338    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.136429    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.136531    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.136616    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.136734    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.136915    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.136923    4457 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000 && echo "multinode-362000" | sudo tee /etc/hostname
	I0728 18:39:34.195664    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000
	
	I0728 18:39:34.195682    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.195810    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.195923    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.196019    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.196115    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.196261    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.196405    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.196416    4457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:39:34.251934    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:39:34.251961    4457 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:39:34.251977    4457 buildroot.go:174] setting up certificates
	I0728 18:39:34.251989    4457 provision.go:84] configureAuth start
	I0728 18:39:34.251997    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:34.252120    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:34.252242    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.252325    4457 provision.go:143] copyHostCerts
	I0728 18:39:34.252364    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:39:34.252423    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:39:34.252432    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:39:34.252580    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:39:34.252812    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:39:34.252842    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:39:34.252846    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:39:34.252979    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:39:34.253126    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:39:34.253166    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:39:34.253171    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:39:34.253262    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:39:34.253416    4457 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-362000]
	I0728 18:39:34.351530    4457 provision.go:177] copyRemoteCerts
	I0728 18:39:34.351585    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:39:34.351602    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.351753    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.351854    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.351936    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.352010    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:34.383245    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:39:34.383314    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:39:34.402954    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:39:34.403011    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0728 18:39:34.421736    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:39:34.421795    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 18:39:34.441501    4457 provision.go:87] duration metric: took 189.502411ms to configureAuth
	I0728 18:39:34.441513    4457 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:39:34.441648    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:39:34.441661    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:34.441790    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.441876    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.441969    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.442056    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.442145    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.442274    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.442392    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.442404    4457 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:39:34.493819    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:39:34.493833    4457 buildroot.go:70] root file system type: tmpfs
	I0728 18:39:34.493900    4457 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:39:34.493913    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.494071    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.494176    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.494279    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.494372    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.494513    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.494655    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.494702    4457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:39:34.554254    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:39:34.554276    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.554416    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.554498    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.554612    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.554707    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.554839    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.554983    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.554996    4457 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:39:36.092020    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:39:36.092036    4457 main.go:141] libmachine: Checking connection to Docker...
	I0728 18:39:36.092043    4457 main.go:141] libmachine: (multinode-362000) Calling .GetURL
	I0728 18:39:36.092183    4457 main.go:141] libmachine: Docker is up and running!
	I0728 18:39:36.092191    4457 main.go:141] libmachine: Reticulating splines...
	I0728 18:39:36.092202    4457 client.go:171] duration metric: took 12.971373461s to LocalClient.Create
	I0728 18:39:36.092222    4457 start.go:167] duration metric: took 12.971429469s to libmachine.API.Create "multinode-362000"
	I0728 18:39:36.092231    4457 start.go:293] postStartSetup for "multinode-362000" (driver="hyperkit")
	I0728 18:39:36.092238    4457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:39:36.092255    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.092402    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:39:36.092414    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.092500    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.092597    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.092700    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.092804    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:36.128151    4457 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:39:36.138951    4457 command_runner.go:130] > NAME=Buildroot
	I0728 18:39:36.138964    4457 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:39:36.138977    4457 command_runner.go:130] > ID=buildroot
	I0728 18:39:36.138981    4457 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:39:36.138986    4457 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:39:36.139066    4457 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:39:36.139079    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:39:36.139192    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:39:36.139381    4457 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:39:36.139387    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:39:36.139596    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:39:36.150034    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:39:36.181496    4457 start.go:296] duration metric: took 89.257928ms for postStartSetup
	I0728 18:39:36.181528    4457 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:39:36.182156    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:36.182315    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:39:36.182682    4457 start.go:128] duration metric: took 13.115685704s to createHost
	I0728 18:39:36.182696    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.182783    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.182873    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.182964    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.183052    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.183169    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:36.183299    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:36.183310    4457 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:39:36.233941    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217176.431962180
	
	I0728 18:39:36.233954    4457 fix.go:216] guest clock: 1722217176.431962180
	I0728 18:39:36.233959    4457 fix.go:229] Guest: 2024-07-28 18:39:36.43196218 -0700 PDT Remote: 2024-07-28 18:39:36.18269 -0700 PDT m=+13.540401962 (delta=249.27218ms)
	I0728 18:39:36.233976    4457 fix.go:200] guest clock delta is within tolerance: 249.27218ms
	I0728 18:39:36.233981    4457 start.go:83] releasing machines lock for "multinode-362000", held for 13.167128835s
	I0728 18:39:36.233999    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234157    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:36.234246    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234536    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234638    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234704    4457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:39:36.234729    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.234795    4457 ssh_runner.go:195] Run: cat /version.json
	I0728 18:39:36.234808    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.234813    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.234911    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.234922    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.235003    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.235023    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.235108    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.235124    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:36.235177    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:36.264413    4457 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 18:39:36.264685    4457 ssh_runner.go:195] Run: systemctl --version
	I0728 18:39:36.316672    4457 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:39:36.316742    4457 command_runner.go:130] > systemd 252 (252)
	I0728 18:39:36.316767    4457 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 18:39:36.316889    4457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:39:36.321953    4457 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:39:36.321971    4457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:39:36.322010    4457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:39:36.334160    4457 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:39:36.334256    4457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:39:36.334266    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:39:36.334357    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:39:36.348900    4457 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:39:36.349190    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:39:36.357441    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:39:36.365579    4457 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:39:36.365615    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:39:36.374041    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:39:36.382588    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:39:36.390648    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:39:36.398803    4457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:39:36.407245    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:39:36.415394    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:39:36.423686    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:39:36.431845    4457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:39:36.439196    4457 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:39:36.439273    4457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:39:36.446659    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:36.545774    4457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:39:36.564725    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:39:36.564801    4457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:39:36.579874    4457 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:39:36.579930    4457 command_runner.go:130] > [Unit]
	I0728 18:39:36.579938    4457 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:39:36.579957    4457 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:39:36.579965    4457 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:39:36.579969    4457 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:39:36.579973    4457 command_runner.go:130] > StartLimitBurst=3
	I0728 18:39:36.579977    4457 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:39:36.579980    4457 command_runner.go:130] > [Service]
	I0728 18:39:36.579984    4457 command_runner.go:130] > Type=notify
	I0728 18:39:36.579987    4457 command_runner.go:130] > Restart=on-failure
	I0728 18:39:36.579994    4457 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:39:36.580003    4457 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:39:36.580010    4457 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:39:36.580018    4457 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:39:36.580025    4457 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:39:36.580030    4457 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:39:36.580049    4457 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:39:36.580060    4457 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:39:36.580066    4457 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:39:36.580070    4457 command_runner.go:130] > ExecStart=
	I0728 18:39:36.580083    4457 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:39:36.580089    4457 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:39:36.580095    4457 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:39:36.580100    4457 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:39:36.580104    4457 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:39:36.580108    4457 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:39:36.580111    4457 command_runner.go:130] > LimitCORE=infinity
	I0728 18:39:36.580115    4457 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:39:36.580125    4457 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:39:36.580129    4457 command_runner.go:130] > TasksMax=infinity
	I0728 18:39:36.580132    4457 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:39:36.580138    4457 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:39:36.580141    4457 command_runner.go:130] > Delegate=yes
	I0728 18:39:36.580146    4457 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:39:36.580150    4457 command_runner.go:130] > KillMode=process
	I0728 18:39:36.580153    4457 command_runner.go:130] > [Install]
	I0728 18:39:36.580162    4457 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:39:36.580233    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:39:36.595157    4457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:39:36.607711    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:39:36.621293    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:39:36.636257    4457 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:39:36.654754    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:39:36.665107    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:39:36.679672    4457 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:39:36.679980    4457 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:39:36.682999    4457 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:39:36.683073    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:39:36.690292    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:39:36.703631    4457 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:39:36.798645    4457 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:39:36.913608    4457 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:39:36.913683    4457 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:39:36.928766    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:37.023923    4457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:39:39.303257    4457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.279358253s)
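The 130-byte /etc/docker/daemon.json scp'd above (to select the "cgroupfs" cgroup driver) is not echoed in the log. A typical shape for such a file is sketched below; the field values are assumptions, not the exact bytes minikube wrote:

```shell
# Hypothetical reconstruction of a cgroupfs daemon.json; only
# "exec-opts" is implied by the log line, the rest is illustrative.
cat > /tmp/daemon.json.demo <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
cat /tmp/daemon.json.demo
```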
	I0728 18:39:39.303313    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:39:39.313662    4457 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:39:39.326612    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:39:39.337715    4457 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:39:39.430884    4457 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:39:39.532367    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:39.628365    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:39:39.643204    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:39:39.654329    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:39.763825    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:39:39.826299    4457 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:39:39.826376    4457 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:39:39.830952    4457 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:39:39.830966    4457 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:39:39.830971    4457 command_runner.go:130] > Device: 0,22	Inode: 799         Links: 1
	I0728 18:39:39.830976    4457 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:39:39.830980    4457 command_runner.go:130] > Access: 2024-07-29 01:39:39.975016608 +0000
	I0728 18:39:39.830985    4457 command_runner.go:130] > Modify: 2024-07-29 01:39:39.975016608 +0000
	I0728 18:39:39.830991    4457 command_runner.go:130] > Change: 2024-07-29 01:39:39.977016459 +0000
	I0728 18:39:39.830996    4457 command_runner.go:130] >  Birth: -
	I0728 18:39:39.831121    4457 start.go:563] Will wait 60s for crictl version
	I0728 18:39:39.831186    4457 ssh_runner.go:195] Run: which crictl
	I0728 18:39:39.833906    4457 command_runner.go:130] > /usr/bin/crictl
	I0728 18:39:39.834125    4457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:39:39.868813    4457 command_runner.go:130] > Version:  0.1.0
	I0728 18:39:39.868827    4457 command_runner.go:130] > RuntimeName:  docker
	I0728 18:39:39.868831    4457 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:39:39.868835    4457 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:39:39.870030    4457 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:39:39.870104    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:39:39.888179    4457 command_runner.go:130] > 27.1.0
	I0728 18:39:39.888967    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:39:39.904866    4457 command_runner.go:130] > 27.1.0
	I0728 18:39:39.954951    4457 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:39:39.954998    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:39.955386    4457 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:39:39.960117    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:39:39.970920    4457 kubeadm.go:883] updating cluster {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0728 18:39:39.970989    4457 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:39:39.971053    4457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:39:39.982664    4457 docker.go:685] Got preloaded images: 
	I0728 18:39:39.982687    4457 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0728 18:39:39.982735    4457 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:39:39.990905    4457 command_runner.go:139] > {"Repositories":{}}
	I0728 18:39:39.991063    4457 ssh_runner.go:195] Run: which lz4
	I0728 18:39:39.993878    4457 command_runner.go:130] > /usr/bin/lz4
	I0728 18:39:39.994003    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0728 18:39:39.994127    4457 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0728 18:39:39.997154    4457 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:39:39.997231    4457 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:39:39.997247    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0728 18:39:40.955181    4457 docker.go:649] duration metric: took 961.12418ms to copy over tarball
	I0728 18:39:40.955248    4457 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0728 18:39:43.311904    4457 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.356684706s)
	I0728 18:39:43.311919    4457 ssh_runner.go:146] rm: /preloaded.tar.lz4
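The preload sequence above is: stat the target path, scp the tarball only when the stat fails, extract with xattrs preserved, then delete the tarball. The decision logic can be sketched locally (stand-in paths; the log uses /preloaded.tar.lz4 on the VM):

```shell
# Stand-in for the on-VM path /preloaded.tar.lz4.
TARBALL=/tmp/preloaded.demo.tar.lz4
if ! stat -c "%s %y" "$TARBALL" >/dev/null 2>&1; then
  # stat failed ("No such file or directory"), so the tarball
  # would be transferred before extraction, as in the log.
  echo "missing: would scp tarball over" > /tmp/preload.decision
else
  echo "present: skip transfer" > /tmp/preload.decision
fi
# Extraction step from the log (xattrs kept so file capabilities survive):
#   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
cat /tmp/preload.decision
```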
	I0728 18:39:43.338182    4457 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:39:43.345890    4457 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0728 18:39:43.345970    4457 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0728 18:39:43.359802    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:43.464657    4457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:39:45.815797    4457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.351164626s)
	I0728 18:39:45.815906    4457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:39:45.828514    4457 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0728 18:39:45.828528    4457 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0728 18:39:45.828533    4457 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0728 18:39:45.828545    4457 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0728 18:39:45.828549    4457 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0728 18:39:45.828553    4457 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:39:45.828557    4457 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0728 18:39:45.828561    4457 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:39:45.829169    4457 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:39:45.829186    4457 cache_images.go:84] Images are preloaded, skipping loading
	I0728 18:39:45.829208    4457 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0728 18:39:45.829285    4457 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:39:45.829361    4457 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:39:45.865868    4457 command_runner.go:130] > cgroupfs
	I0728 18:39:45.866530    4457 cni.go:84] Creating CNI manager for ""
	I0728 18:39:45.866540    4457 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0728 18:39:45.866550    4457 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:39:45.866567    4457 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-362000 NodeName:multinode-362000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:39:45.866659    4457 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-362000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:39:45.866716    4457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:39:45.874335    4457 command_runner.go:130] > kubeadm
	I0728 18:39:45.874343    4457 command_runner.go:130] > kubectl
	I0728 18:39:45.874346    4457 command_runner.go:130] > kubelet
	I0728 18:39:45.874413    4457 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:39:45.874458    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:39:45.881783    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0728 18:39:45.895323    4457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:39:45.909095    4457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0728 18:39:45.922717    4457 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:39:45.925684    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
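The /etc/hosts update used twice above is an idempotent replace: strip any existing line for the name, append the fresh mapping, then copy the result back into place. A self-contained sketch of the same pattern (on a temp file instead of /etc/hosts, with an assumed stale IP 192.169.0.9):

```shell
# Demo file standing in for /etc/hosts; the real command ends with sudo cp.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.169.0.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any stale mapping for the name, then append the fresh one.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"
  printf '192.169.0.13\tcontrol-plane.minikube.internal\n'
} > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Because the old line is filtered out first, running the snippet repeatedly leaves exactly one mapping for the name.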
	I0728 18:39:45.935231    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:46.028862    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:39:46.043464    4457 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.13
	I0728 18:39:46.043477    4457 certs.go:194] generating shared ca certs ...
	I0728 18:39:46.043486    4457 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.043672    4457 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:39:46.043747    4457 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:39:46.043758    4457 certs.go:256] generating profile certs ...
	I0728 18:39:46.043800    4457 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key
	I0728 18:39:46.043812    4457 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt with IP's: []
	I0728 18:39:46.478407    4457 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt ...
	I0728 18:39:46.478427    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt: {Name:mka2aac26f6bb35ea3d4721520c4f39c62d89174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.478776    4457 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key ...
	I0728 18:39:46.478784    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key: {Name:mk7c0f81fa266c66b46f4b0af80e0b57928387bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.479030    4457 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57
	I0728 18:39:46.479046    4457 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0728 18:39:46.651341    4457 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57 ...
	I0728 18:39:46.651356    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57: {Name:mk093692e36abd7a7afccd1c946f90bc40aad12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.651665    4457 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57 ...
	I0728 18:39:46.651674    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57: {Name:mkc9cd932269a62b355966e5b683dd182c98ca39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.651895    4457 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt
	I0728 18:39:46.652085    4457 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key
	I0728 18:39:46.652272    4457 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key
	I0728 18:39:46.652288    4457 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt with IP's: []
	I0728 18:39:46.815503    4457 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt ...
	I0728 18:39:46.815517    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt: {Name:mkf99da5cbf1447710168bfc4b4f7f7f9d4a5014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.815842    4457 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key ...
	I0728 18:39:46.815852    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key: {Name:mk4611170239081f2e211d7d80246aa607ebb9f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.816095    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:39:46.816126    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:39:46.816147    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:39:46.816169    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:39:46.816189    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 18:39:46.816210    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 18:39:46.816249    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 18:39:46.816268    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 18:39:46.816368    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:39:46.816423    4457 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:39:46.816432    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:39:46.816465    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:39:46.816496    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:39:46.816525    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:39:46.816592    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:39:46.816639    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:46.816663    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:39:46.816682    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:39:46.817131    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:39:46.847289    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:39:46.869482    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:39:46.891807    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:39:46.911672    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0728 18:39:46.931550    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:39:46.952093    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:39:46.972162    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:39:46.992027    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:39:47.011482    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:39:47.031028    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:39:47.051105    4457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 18:39:47.065442    4457 ssh_runner.go:195] Run: openssl version
	I0728 18:39:47.069800    4457 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:39:47.069946    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:39:47.078286    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.081634    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.081777    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.081812    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.085826    4457 command_runner.go:130] > b5213941
	I0728 18:39:47.086002    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:39:47.094361    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:39:47.103154    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.106672    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.106692    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.106727    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.110935    4457 command_runner.go:130] > 51391683
	I0728 18:39:47.111129    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:39:47.119568    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:39:47.128138    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.131687    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.131763    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.131796    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.136089    4457 command_runner.go:130] > 3ec20f2e
	I0728 18:39:47.136126    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:39:47.144736    4457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:39:47.148009    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:39:47.148026    4457 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:39:47.148069    4457 kubeadm.go:392] StartCluster: {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:39:47.148163    4457 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:39:47.160119    4457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:39:47.167833    4457 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0728 18:39:47.167854    4457 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0728 18:39:47.167859    4457 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0728 18:39:47.167913    4457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:39:47.175408    4457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:39:47.183077    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0728 18:39:47.183091    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0728 18:39:47.183097    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0728 18:39:47.183106    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:39:47.183125    4457 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:39:47.183131    4457 kubeadm.go:157] found existing configuration files:
	
	I0728 18:39:47.183169    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 18:39:47.190436    4457 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:39:47.190454    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:39:47.190491    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:39:47.198053    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 18:39:47.205431    4457 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:39:47.205448    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:39:47.205483    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:39:47.213148    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 18:39:47.220352    4457 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:39:47.220374    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:39:47.220409    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:39:47.227976    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 18:39:47.235196    4457 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:39:47.235212    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:39:47.235245    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 18:39:47.242661    4457 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0728 18:39:47.301790    4457 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0728 18:39:47.301802    4457 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0728 18:39:47.301844    4457 kubeadm.go:310] [preflight] Running pre-flight checks
	I0728 18:39:47.301852    4457 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:39:47.387758    4457 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:39:47.387768    4457 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:39:47.387870    4457 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:39:47.387880    4457 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:39:47.387956    4457 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:39:47.387956    4457 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:39:47.560153    4457 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:39:47.560166    4457 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:39:47.583494    4457 out.go:204]   - Generating certificates and keys ...
	I0728 18:39:47.583553    4457 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0728 18:39:47.583560    4457 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0728 18:39:47.583614    4457 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0728 18:39:47.583620    4457 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0728 18:39:47.767383    4457 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0728 18:39:47.767390    4457 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0728 18:39:47.902927    4457 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0728 18:39:47.902943    4457 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0728 18:39:48.029398    4457 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0728 18:39:48.029416    4457 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0728 18:39:48.230360    4457 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0728 18:39:48.230376    4457 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0728 18:39:48.466250    4457 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0728 18:39:48.466267    4457 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0728 18:39:48.466383    4457 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.466393    4457 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.653665    4457 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0728 18:39:48.653680    4457 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0728 18:39:48.653781    4457 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.653793    4457 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.906060    4457 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0728 18:39:48.906072    4457 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0728 18:39:49.017102    4457 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0728 18:39:49.017115    4457 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0728 18:39:49.099226    4457 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0728 18:39:49.099241    4457 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0728 18:39:49.099370    4457 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:39:49.099380    4457 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:39:49.290179    4457 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:39:49.290193    4457 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:39:49.662361    4457 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0728 18:39:49.662379    4457 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0728 18:39:49.814296    4457 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:39:49.814311    4457 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:39:49.936514    4457 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:39:49.936530    4457 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:39:50.223908    4457 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:39:50.223913    4457 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:39:50.224262    4457 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:39:50.224272    4457 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:39:50.225979    4457 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:39:50.225995    4457 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:39:50.247458    4457 out.go:204]   - Booting up control plane ...
	I0728 18:39:50.247537    4457 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:39:50.247541    4457 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:39:50.247615    4457 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:39:50.247622    4457 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:39:50.247680    4457 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:39:50.247687    4457 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:39:50.248416    4457 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:39:50.248423    4457 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:39:50.248656    4457 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:39:50.248662    4457 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:39:50.248709    4457 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0728 18:39:50.248719    4457 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:39:50.354370    4457 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0728 18:39:50.354373    4457 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0728 18:39:50.354452    4457 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:39:50.354459    4457 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:39:50.862036    4457 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.998553ms
	I0728 18:39:50.862052    4457 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 507.998553ms
	I0728 18:39:50.862118    4457 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0728 18:39:50.862132    4457 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0728 18:39:55.360922    4457 kubeadm.go:310] [api-check] The API server is healthy after 4.5017507s
	I0728 18:39:55.360932    4457 command_runner.go:130] > [api-check] The API server is healthy after 4.5017507s
	I0728 18:39:55.372416    4457 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:39:55.372424    4457 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:39:55.379262    4457 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:39:55.379271    4457 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:39:55.393857    4457 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:39:55.393872    4457 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:39:55.394021    4457 kubeadm.go:310] [mark-control-plane] Marking the node multinode-362000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:39:55.394030    4457 command_runner.go:130] > [mark-control-plane] Marking the node multinode-362000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:39:55.402932    4457 kubeadm.go:310] [bootstrap-token] Using token: 53nsa7.gvs19q17kvpjmfej
	I0728 18:39:55.402953    4457 command_runner.go:130] > [bootstrap-token] Using token: 53nsa7.gvs19q17kvpjmfej
	I0728 18:39:55.430849    4457 out.go:204]   - Configuring RBAC rules ...
	I0728 18:39:55.431017    4457 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:39:55.431022    4457 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:39:55.473819    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:39:55.473825    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:39:55.478550    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:39:55.478567    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:39:55.480819    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:39:55.480828    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:39:55.482589    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:39:55.482602    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:39:55.484467    4457 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:39:55.484467    4457 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:39:55.769440    4457 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:39:55.769445    4457 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:39:56.177834    4457 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0728 18:39:56.177851    4457 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0728 18:39:56.764455    4457 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0728 18:39:56.764469    4457 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0728 18:39:56.765295    4457 kubeadm.go:310] 
	I0728 18:39:56.765354    4457 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0728 18:39:56.765371    4457 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0728 18:39:56.765383    4457 kubeadm.go:310] 
	I0728 18:39:56.765450    4457 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0728 18:39:56.765459    4457 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0728 18:39:56.765463    4457 kubeadm.go:310] 
	I0728 18:39:56.765508    4457 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0728 18:39:56.765516    4457 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0728 18:39:56.765571    4457 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:39:56.765579    4457 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:39:56.765626    4457 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:39:56.765634    4457 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:39:56.765642    4457 kubeadm.go:310] 
	I0728 18:39:56.765680    4457 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0728 18:39:56.765685    4457 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0728 18:39:56.765697    4457 kubeadm.go:310] 
	I0728 18:39:56.765732    4457 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:39:56.765737    4457 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:39:56.765740    4457 kubeadm.go:310] 
	I0728 18:39:56.765774    4457 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0728 18:39:56.765778    4457 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0728 18:39:56.765828    4457 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:39:56.765832    4457 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:39:56.765877    4457 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:39:56.765881    4457 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:39:56.765884    4457 kubeadm.go:310] 
	I0728 18:39:56.765956    4457 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:39:56.765967    4457 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:39:56.766038    4457 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0728 18:39:56.766039    4457 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0728 18:39:56.766048    4457 kubeadm.go:310] 
	I0728 18:39:56.766112    4457 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766118    4457 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766206    4457 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 \
	I0728 18:39:56.766213    4457 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 \
	I0728 18:39:56.766235    4457 command_runner.go:130] > 	--control-plane 
	I0728 18:39:56.766241    4457 kubeadm.go:310] 	--control-plane 
	I0728 18:39:56.766249    4457 kubeadm.go:310] 
	I0728 18:39:56.766316    4457 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:39:56.766320    4457 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:39:56.766327    4457 kubeadm.go:310] 
	I0728 18:39:56.766390    4457 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766397    4457 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766481    4457 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:39:56.766486    4457 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:39:56.767454    4457 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:39:56.767464    4457 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:39:56.767521    4457 cni.go:84] Creating CNI manager for ""
	I0728 18:39:56.767527    4457 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0728 18:39:56.794259    4457 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 18:39:56.852216    4457 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 18:39:56.857699    4457 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0728 18:39:56.857712    4457 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0728 18:39:56.857719    4457 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0728 18:39:56.857724    4457 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 18:39:56.857728    4457 command_runner.go:130] > Access: 2024-07-29 01:39:33.652253582 +0000
	I0728 18:39:56.857739    4457 command_runner.go:130] > Modify: 2024-07-23 05:15:32.000000000 +0000
	I0728 18:39:56.857744    4457 command_runner.go:130] > Change: 2024-07-29 01:39:32.205688945 +0000
	I0728 18:39:56.857755    4457 command_runner.go:130] >  Birth: -
	I0728 18:39:56.857898    4457 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0728 18:39:56.857905    4457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0728 18:39:56.872532    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 18:39:57.067489    4457 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0728 18:39:57.071888    4457 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0728 18:39:57.076162    4457 command_runner.go:130] > serviceaccount/kindnet created
	I0728 18:39:57.081626    4457 command_runner.go:130] > daemonset.apps/kindnet created
	I0728 18:39:57.082864    4457 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:39:57.082924    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:57.082939    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-362000 minikube.k8s.io/updated_at=2024_07_28T18_39_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=multinode-362000 minikube.k8s.io/primary=true
	I0728 18:39:57.141282    4457 command_runner.go:130] > -16
	I0728 18:39:57.141501    4457 ops.go:34] apiserver oom_adj: -16
	I0728 18:39:57.223732    4457 command_runner.go:130] > node/multinode-362000 labeled
	I0728 18:39:57.224678    4457 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0728 18:39:57.224780    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:57.286683    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:57.727000    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:57.789564    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:58.224922    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:58.288791    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:58.726483    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:58.786616    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:59.224879    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:59.283451    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:59.725546    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:59.783054    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:00.226641    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:00.289133    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:00.724775    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:00.788264    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:01.224763    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:01.289858    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:01.726327    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:01.784894    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:02.224927    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:02.285820    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:02.725994    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:02.785885    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:03.225965    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:03.284414    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:03.724778    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:03.785882    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:04.225389    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:04.285366    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:04.725557    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:04.786398    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:05.226293    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:05.293946    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:05.725634    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:05.785973    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:06.226151    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:06.293162    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:06.725726    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:06.794820    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:07.225166    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:07.286052    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:07.726737    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:07.784693    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:08.224870    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:08.285867    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:08.726546    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:08.785482    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:09.225131    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:09.285788    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:09.725623    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:09.788869    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:10.225058    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:10.283013    4457 command_runner.go:130] > NAME      SECRETS   AGE
	I0728 18:40:10.283026    4457 command_runner.go:130] > default   0         0s
	I0728 18:40:10.284131    4457 kubeadm.go:1113] duration metric: took 13.201513533s to wait for elevateKubeSystemPrivileges
	I0728 18:40:10.284150    4457 kubeadm.go:394] duration metric: took 23.136542682s to StartCluster
	I0728 18:40:10.284174    4457 settings.go:142] acquiring lock: {Name:mk9218fe520c81adf28e6207ae402102e10a5d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:40:10.284272    4457 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.284780    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/kubeconfig: {Name:mk76ac5b4283108fca1a66cc5cd0791fbea0691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:40:10.285026    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 18:40:10.285038    4457 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:40:10.285076    4457 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:40:10.285121    4457 addons.go:69] Setting storage-provisioner=true in profile "multinode-362000"
	I0728 18:40:10.285133    4457 addons.go:69] Setting default-storageclass=true in profile "multinode-362000"
	I0728 18:40:10.285153    4457 addons.go:234] Setting addon storage-provisioner=true in "multinode-362000"
	I0728 18:40:10.285172    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:10.309576    4457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-362000"
	I0728 18:40:10.309786    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:10.309950    4457 out.go:177] * Verifying Kubernetes components...
	I0728 18:40:10.310638    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.310672    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.310955    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.310988    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.320224    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52539
	I0728 18:40:10.320256    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52538
	I0728 18:40:10.320612    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.320645    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.320974    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.320984    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.320996    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.321036    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.321199    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.321249    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.321379    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:10.321476    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:10.321562    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:10.321610    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.321636    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.323925    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.324234    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:10.324779    4457 cert_rotation.go:137] Starting client certificate rotation controller
	I0728 18:40:10.324964    4457 addons.go:234] Setting addon default-storageclass=true in "multinode-362000"
	I0728 18:40:10.324992    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:10.325245    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.325277    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.330641    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52542
	I0728 18:40:10.331008    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.331360    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.331375    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.331589    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.331768    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:10.332140    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:10.332178    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:10.333069    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:10.334227    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52544
	I0728 18:40:10.352904    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:10.353246    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.353698    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.353710    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.353959    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.354421    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.354445    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.363396    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52546
	I0728 18:40:10.363753    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.364112    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.364129    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.364390    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.364525    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:10.364628    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:10.364708    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:10.374788    4457 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:40:10.375201    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:10.375451    4457 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 18:40:10.375461    4457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 18:40:10.375477    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:40:10.375580    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:40:10.375677    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:40:10.375768    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:40:10.375851    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:40:10.395728    4457 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:40:10.395747    4457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 18:40:10.395792    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:40:10.395957    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:40:10.396043    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:40:10.396125    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:40:10.396226    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:40:10.401627    4457 command_runner.go:130] > apiVersion: v1
	I0728 18:40:10.401639    4457 command_runner.go:130] > data:
	I0728 18:40:10.401643    4457 command_runner.go:130] >   Corefile: |
	I0728 18:40:10.401645    4457 command_runner.go:130] >     .:53 {
	I0728 18:40:10.401649    4457 command_runner.go:130] >         errors
	I0728 18:40:10.401652    4457 command_runner.go:130] >         health {
	I0728 18:40:10.401658    4457 command_runner.go:130] >            lameduck 5s
	I0728 18:40:10.401662    4457 command_runner.go:130] >         }
	I0728 18:40:10.401666    4457 command_runner.go:130] >         ready
	I0728 18:40:10.401673    4457 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0728 18:40:10.401677    4457 command_runner.go:130] >            pods insecure
	I0728 18:40:10.401681    4457 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0728 18:40:10.401693    4457 command_runner.go:130] >            ttl 30
	I0728 18:40:10.401697    4457 command_runner.go:130] >         }
	I0728 18:40:10.401700    4457 command_runner.go:130] >         prometheus :9153
	I0728 18:40:10.401705    4457 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0728 18:40:10.401709    4457 command_runner.go:130] >            max_concurrent 1000
	I0728 18:40:10.401713    4457 command_runner.go:130] >         }
	I0728 18:40:10.401716    4457 command_runner.go:130] >         cache 30
	I0728 18:40:10.401720    4457 command_runner.go:130] >         loop
	I0728 18:40:10.401723    4457 command_runner.go:130] >         reload
	I0728 18:40:10.401727    4457 command_runner.go:130] >         loadbalance
	I0728 18:40:10.401730    4457 command_runner.go:130] >     }
	I0728 18:40:10.401733    4457 command_runner.go:130] > kind: ConfigMap
	I0728 18:40:10.401737    4457 command_runner.go:130] > metadata:
	I0728 18:40:10.401742    4457 command_runner.go:130] >   creationTimestamp: "2024-07-29T01:39:56Z"
	I0728 18:40:10.401746    4457 command_runner.go:130] >   name: coredns
	I0728 18:40:10.401750    4457 command_runner.go:130] >   namespace: kube-system
	I0728 18:40:10.401753    4457 command_runner.go:130] >   resourceVersion: "229"
	I0728 18:40:10.401757    4457 command_runner.go:130] >   uid: 090d6d0b-6aa9-498f-b5ff-18ee2e948131
	I0728 18:40:10.401847    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 18:40:10.517689    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:40:10.608660    4457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:40:10.613299    4457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 18:40:10.755970    4457 command_runner.go:130] > configmap/coredns replaced
	I0728 18:40:10.758280    4457 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0728 18:40:10.758560    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.758560    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.758749    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:10.758752    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:10.758941    4457 node_ready.go:35] waiting up to 6m0s for node "multinode-362000" to be "Ready" ...
	I0728 18:40:10.759003    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:10.759004    4457 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 18:40:10.759008    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:10.759011    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:10.759017    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:10.759017    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:10.759021    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:10.759023    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:10.766260    4457 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0728 18:40:10.766273    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:10.766278    4457 round_trippers.go:580]     Audit-Id: 18631da8-eb42-4cf5-8868-257785e0a022
	I0728 18:40:10.766282    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:10.766285    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:10.766288    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:10.766291    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:10.766294    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:10 GMT
	I0728 18:40:10.766588    4457 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0728 18:40:10.766597    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:10.766603    4457 round_trippers.go:580]     Audit-Id: 700a3297-1555-4166-b81d-840902aaebd8
	I0728 18:40:10.766609    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:10.766614    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:10.766617    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:10.766621    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:10.766624    4457 round_trippers.go:580]     Content-Length: 291
	I0728 18:40:10.766628    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:10 GMT
	I0728 18:40:10.766675    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:10.766696    4457 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"355","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:10.767124    4457 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"355","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:10.767162    4457 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 18:40:10.767169    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:10.767176    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:10.767180    4457 round_trippers.go:473]     Content-Type: application/json
	I0728 18:40:10.767185    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:10.772809    4457 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0728 18:40:10.772827    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:10.772832    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:10 GMT
	I0728 18:40:10.772835    4457 round_trippers.go:580]     Audit-Id: 8878de70-45f8-4839-b86f-f063423caff9
	I0728 18:40:10.772838    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:10.772841    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:10.772844    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:10.772848    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:10.772850    4457 round_trippers.go:580]     Content-Length: 291
	I0728 18:40:10.772862    4457 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"358","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:11.062303    4457 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0728 18:40:11.062320    4457 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0728 18:40:11.062326    4457 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0728 18:40:11.062331    4457 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0728 18:40:11.062335    4457 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0728 18:40:11.062339    4457 command_runner.go:130] > pod/storage-provisioner created
	I0728 18:40:11.062368    4457 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0728 18:40:11.062376    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062385    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062402    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062409    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062555    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062570    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062576    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062583    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062586    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062590    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062595    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062599    4457 main.go:141] libmachine: (multinode-362000) DBG | Closing plugin on server side
	I0728 18:40:11.062601    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062784    4457 main.go:141] libmachine: (multinode-362000) DBG | Closing plugin on server side
	I0728 18:40:11.062785    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062797    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062797    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062809    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062886    4457 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0728 18:40:11.062894    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.062903    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.062915    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.067002    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:40:11.067014    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.067020    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.067023    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.067026    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.067030    4457 round_trippers.go:580]     Content-Length: 1273
	I0728 18:40:11.067033    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.067037    4457 round_trippers.go:580]     Audit-Id: 8a900ef1-fe6e-4dbd-a7ec-344182a6729a
	I0728 18:40:11.067040    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.067428    4457 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"379"},"items":[{"metadata":{"name":"standard","uid":"b2f47efd-8c58-4f8d-ad0f-27dfc164889d","resourceVersion":"369","creationTimestamp":"2024-07-29T01:40:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-29T01:40:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0728 18:40:11.067670    4457 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b2f47efd-8c58-4f8d-ad0f-27dfc164889d","resourceVersion":"369","creationTimestamp":"2024-07-29T01:40:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-29T01:40:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 18:40:11.067704    4457 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0728 18:40:11.067710    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.067716    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.067721    4457 round_trippers.go:473]     Content-Type: application/json
	I0728 18:40:11.067723    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.070206    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.070215    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.070220    4457 round_trippers.go:580]     Content-Length: 1220
	I0728 18:40:11.070223    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.070229    4457 round_trippers.go:580]     Audit-Id: 0e3733b8-b6c8-4111-84ee-078978e48daa
	I0728 18:40:11.070231    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.070235    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.070237    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.070239    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.070265    4457 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b2f47efd-8c58-4f8d-ad0f-27dfc164889d","resourceVersion":"369","creationTimestamp":"2024-07-29T01:40:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-29T01:40:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 18:40:11.070362    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.070370    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.070525    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.070536    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.094564    4457 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 18:40:11.135503    4457 addons.go:510] duration metric: took 850.452019ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0728 18:40:11.260114    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:11.260130    4457 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 18:40:11.260136    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.260144    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.260153    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.260192    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.260157    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.260230    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.262846    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.262856    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.262861    4457 round_trippers.go:580]     Audit-Id: 67b8a37c-428f-4a9a-960c-05602441f47b
	I0728 18:40:11.262864    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.262874    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.262880    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.262883    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.262886    4457 round_trippers.go:580]     Content-Length: 291
	I0728 18:40:11.262889    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.262902    4457 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"368","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:11.262948    4457 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-362000" context rescaled to 1 replicas
	I0728 18:40:11.263058    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.263070    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.263076    4457 round_trippers.go:580]     Audit-Id: ec8bd958-b06b-4f2f-979b-59534f7f9af2
	I0728 18:40:11.263080    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.263083    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.263086    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.263090    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.263093    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.263231    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:11.759227    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:11.759246    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.759255    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.759259    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.761392    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.761402    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.761407    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.761424    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.761436    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.761451    4457 round_trippers.go:580]     Audit-Id: a4047d81-9009-45a6-8539-024944a72d9e
	I0728 18:40:11.761462    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.761467    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.761666    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:12.260536    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:12.260552    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:12.260564    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:12.260570    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:12.262176    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:12.262211    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:12.262222    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:12.262247    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:12.262254    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:12.262257    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:12.262261    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:12 GMT
	I0728 18:40:12.262263    4457 round_trippers.go:580]     Audit-Id: f204e21c-9b48-455d-8622-6066ec208239
	I0728 18:40:12.262371    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:12.759261    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:12.759274    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:12.759280    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:12.759283    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:12.760765    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:12.760774    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:12.760781    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:12.760786    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:12 GMT
	I0728 18:40:12.760790    4457 round_trippers.go:580]     Audit-Id: 992ec9f1-85c8-43b7-a888-469a1f8515b4
	I0728 18:40:12.760794    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:12.760797    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:12.760800    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:12.761071    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:12.761255    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:13.261024    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:13.261039    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:13.261046    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:13.261049    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:13.262638    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:13.262651    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:13.262659    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:13.262668    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:13.262672    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:13.262676    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:13.262679    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:13 GMT
	I0728 18:40:13.262682    4457 round_trippers.go:580]     Audit-Id: 0fffd104-8bd3-4fe0-9b9a-1cf59e471957
	I0728 18:40:13.262886    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:13.759154    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:13.759167    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:13.759174    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:13.759177    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:13.760713    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:13.760722    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:13.760726    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:13 GMT
	I0728 18:40:13.760730    4457 round_trippers.go:580]     Audit-Id: a45b0e81-84fa-4753-9739-8d28ab5d0a14
	I0728 18:40:13.760735    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:13.760738    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:13.760741    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:13.760744    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:13.760819    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:14.260145    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:14.260163    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:14.260172    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:14.260177    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:14.263013    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:14.263024    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:14.263030    4457 round_trippers.go:580]     Audit-Id: fdaf6131-25d1-47c2-acd1-446b115ffb8e
	I0728 18:40:14.263035    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:14.263038    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:14.263041    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:14.263044    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:14.263052    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:14 GMT
	I0728 18:40:14.263231    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:14.760316    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:14.760340    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:14.760351    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:14.760357    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:14.762851    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:14.762866    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:14.762873    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:14.762879    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:14.762888    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:14.762894    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:14 GMT
	I0728 18:40:14.762900    4457 round_trippers.go:580]     Audit-Id: af5c0ab8-4e6d-4849-bceb-87f698162cc2
	I0728 18:40:14.762906    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:14.762991    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:14.763230    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:15.260614    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:15.260643    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:15.260728    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:15.260736    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:15.263024    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:15.263038    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:15.263052    4457 round_trippers.go:580]     Audit-Id: 2b9d5d3c-3d13-49b3-a08e-4f2d9d3a85ce
	I0728 18:40:15.263057    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:15.263062    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:15.263065    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:15.263069    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:15.263072    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:15 GMT
	I0728 18:40:15.263180    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:15.760417    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:15.760434    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:15.760443    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:15.760448    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:15.762872    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:15.762881    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:15.762886    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:15.762888    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:15.762891    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:15.762893    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:15 GMT
	I0728 18:40:15.762896    4457 round_trippers.go:580]     Audit-Id: 42794fb7-77b1-4a0e-b567-be7a6bfd31d6
	I0728 18:40:15.762899    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:15.763152    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:16.259813    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:16.259848    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:16.259927    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:16.259936    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:16.262408    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:16.262422    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:16.262430    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:16 GMT
	I0728 18:40:16.262434    4457 round_trippers.go:580]     Audit-Id: 14cecab9-6461-4105-9148-d34c0b44a270
	I0728 18:40:16.262439    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:16.262447    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:16.262451    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:16.262455    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:16.262556    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:16.761072    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:16.761173    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:16.761187    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:16.761195    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:16.763703    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:16.763731    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:16.763743    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:16.763750    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:16.763754    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:16.763757    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:16.763762    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:16 GMT
	I0728 18:40:16.763782    4457 round_trippers.go:580]     Audit-Id: 0d47ea48-d37e-4fce-a81d-002d90f6a056
	I0728 18:40:16.763938    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:16.764194    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:17.259207    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:17.259228    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:17.259239    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:17.259249    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:17.262189    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:17.262202    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:17.262209    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:17.262214    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:17.262217    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:17 GMT
	I0728 18:40:17.262222    4457 round_trippers.go:580]     Audit-Id: 9812172f-e586-4c55-b4ee-2bea8ae5de51
	I0728 18:40:17.262226    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:17.262230    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:17.262686    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:17.759473    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:17.759496    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:17.759508    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:17.759513    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:17.762434    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:17.762476    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:17.762491    4457 round_trippers.go:580]     Audit-Id: 43b3d3be-06fc-46db-8a5d-e3d4d5eb47e6
	I0728 18:40:17.762499    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:17.762507    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:17.762532    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:17.762542    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:17.762549    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:17 GMT
	I0728 18:40:17.762674    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:18.260311    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:18.260340    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:18.260351    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:18.260358    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:18.262900    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:18.262915    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:18.262922    4457 round_trippers.go:580]     Audit-Id: 9953baba-073c-48aa-a6e9-278f0713ae83
	I0728 18:40:18.262927    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:18.262930    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:18.262933    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:18.262936    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:18.262941    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:18 GMT
	I0728 18:40:18.263036    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:18.759061    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:18.759085    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:18.759096    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:18.759112    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:18.761682    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:18.761694    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:18.761700    4457 round_trippers.go:580]     Audit-Id: 15b329d9-3ab4-44cd-9049-aa7b2decadcc
	I0728 18:40:18.761705    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:18.761709    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:18.761712    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:18.761717    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:18.761723    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:18 GMT
	I0728 18:40:18.762177    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:19.259000    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:19.259018    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:19.259024    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:19.259030    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:19.261212    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:19.261224    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:19.261230    4457 round_trippers.go:580]     Audit-Id: 415b93b9-1aa3-44bd-a820-1208b592c064
	I0728 18:40:19.261233    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:19.261236    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:19.261239    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:19.261241    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:19.261247    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:19 GMT
	I0728 18:40:19.261358    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:19.261584    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:19.759307    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:19.759329    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:19.759341    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:19.759350    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:19.761775    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:19.761788    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:19.761795    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:19 GMT
	I0728 18:40:19.761800    4457 round_trippers.go:580]     Audit-Id: 502bf989-c4d0-445b-b01d-1f1d535adf96
	I0728 18:40:19.761804    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:19.761807    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:19.761811    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:19.761814    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:19.762181    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:20.260567    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:20.260597    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:20.260609    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:20.260617    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:20.264657    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:40:20.264674    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:20.264682    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:20.264685    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:20 GMT
	I0728 18:40:20.264690    4457 round_trippers.go:580]     Audit-Id: 717f088c-b9a2-4bec-a160-3a9f814ada36
	I0728 18:40:20.264694    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:20.264721    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:20.264725    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:20.264782    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:20.759050    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:20.759074    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:20.759087    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:20.759094    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:20.761793    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:20.761807    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:20.761815    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:20.761819    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:20.761823    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:20.761827    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:20 GMT
	I0728 18:40:20.761831    4457 round_trippers.go:580]     Audit-Id: c4d77fc1-3159-4dbe-8f91-6d29cc9ceefa
	I0728 18:40:20.761835    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:20.762217    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:21.259770    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:21.259798    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:21.259811    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:21.259819    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:21.262872    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:40:21.262887    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:21.262897    4457 round_trippers.go:580]     Audit-Id: e08cd4a9-4dc5-4bfd-9602-71f114db843d
	I0728 18:40:21.262905    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:21.262921    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:21.262930    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:21.262935    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:21.262941    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:21 GMT
	I0728 18:40:21.263367    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:21.263621    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:21.758899    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:21.758911    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:21.758917    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:21.758920    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:21.760397    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:21.760407    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:21.760412    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:21.760416    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:21 GMT
	I0728 18:40:21.760420    4457 round_trippers.go:580]     Audit-Id: 1845b54f-7048-420b-a9cc-345ce940e0f8
	I0728 18:40:21.760423    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:21.760427    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:21.760432    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:21.760518    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:22.260348    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:22.260380    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:22.260392    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:22.260474    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:22.264848    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:40:22.264870    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:22.264881    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:22.264888    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:22.264894    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:22.264901    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:22 GMT
	I0728 18:40:22.264906    4457 round_trippers.go:580]     Audit-Id: a34d53ed-e866-4fba-b73c-e127bd7eee1c
	I0728 18:40:22.264912    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:22.265137    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:22.759929    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:22.759985    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:22.759997    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:22.760004    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:22.762275    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:22.762297    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:22.762304    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:22 GMT
	I0728 18:40:22.762309    4457 round_trippers.go:580]     Audit-Id: 4601913b-d210-4463-8d21-9d1c3a5d5d24
	I0728 18:40:22.762320    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:22.762325    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:22.762329    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:22.762332    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:22.762513    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:23.258971    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:23.258995    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:23.259006    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:23.259013    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:23.261485    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:23.261504    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:23.261512    4457 round_trippers.go:580]     Audit-Id: 09c1c715-a373-48d3-867c-e9bc394b5dae
	I0728 18:40:23.261516    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:23.261519    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:23.261529    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:23.261533    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:23.261537    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:23 GMT
	I0728 18:40:23.261656    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:23.759001    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:23.759024    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:23.759036    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:23.759043    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:23.761072    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:23.761090    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:23.761102    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:23.761110    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:23 GMT
	I0728 18:40:23.761116    4457 round_trippers.go:580]     Audit-Id: 8589fcb8-03dd-4451-ad7d-6a234a984cf4
	I0728 18:40:23.761121    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:23.761124    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:23.761127    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:23.761321    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:23.761508    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:24.259348    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:24.259370    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:24.259381    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:24.259386    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:24.261950    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:24.261964    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:24.261971    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:24 GMT
	I0728 18:40:24.261976    4457 round_trippers.go:580]     Audit-Id: a347f63d-0547-4dc0-9887-ecb808f30b7b
	I0728 18:40:24.261979    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:24.261982    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:24.261985    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:24.261989    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:24.262374    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:24.759752    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:24.759776    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:24.759789    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:24.759798    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:24.762449    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:24.762465    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:24.762473    4457 round_trippers.go:580]     Audit-Id: b4270990-e2d6-4d91-9a90-e915982c91b7
	I0728 18:40:24.762477    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:24.762481    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:24.762484    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:24.762487    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:24.762491    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:24 GMT
	I0728 18:40:24.762565    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:25.258884    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:25.258912    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.258933    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.258940    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.261294    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.261304    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.261309    4457 round_trippers.go:580]     Audit-Id: 34079ddc-0650-4e56-8564-dc888c5c3890
	I0728 18:40:25.261313    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.261315    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.261318    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.261321    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.261323    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.261501    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:25.261699    4457 node_ready.go:49] node "multinode-362000" has status "Ready":"True"
	I0728 18:40:25.261711    4457 node_ready.go:38] duration metric: took 14.503038736s for node "multinode-362000" to be "Ready" ...
	I0728 18:40:25.261719    4457 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:40:25.261757    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:25.261762    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.261768    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.261772    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.264121    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.264129    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.264134    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.264137    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.264141    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.264143    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.264146    4457 round_trippers.go:580]     Audit-Id: 48379578-ba8f-4422-8fad-492237524d4c
	I0728 18:40:25.264149    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.265103    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"404"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0728 18:40:25.267406    4457 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:25.267459    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:25.267464    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.267470    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.267474    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.269607    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.269614    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.269618    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.269621    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.269624    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.269626    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.269629    4457 round_trippers.go:580]     Audit-Id: 1f7e9486-994e-4bba-9cc5-e40bb9316b31
	I0728 18:40:25.269631    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.269920    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0728 18:40:25.270183    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:25.270199    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.270206    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.270210    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.271439    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:25.271446    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.271451    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.271454    4457 round_trippers.go:580]     Audit-Id: 18a2f255-e2cf-44ac-8a5f-27ec41ee30e6
	I0728 18:40:25.271466    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.271469    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.271471    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.271474    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.271730    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:25.769046    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:25.769075    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.769088    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.769095    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.771961    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.771999    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.772032    4457 round_trippers.go:580]     Audit-Id: 6d71ec48-2f1b-44c7-8856-88eb807ea518
	I0728 18:40:25.772047    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.772057    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.772062    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.772066    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.772069    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:25.772230    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0728 18:40:25.772599    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:25.772608    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.772616    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.772625    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.773942    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:25.773951    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.773956    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.773963    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:25.773967    4457 round_trippers.go:580]     Audit-Id: 879544d9-d129-4f7f-b284-bedb5f99a847
	I0728 18:40:25.773970    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.773973    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.773976    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.774043    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.268338    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:26.268362    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.268374    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.268380    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.270981    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:26.270995    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.271002    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:26.271006    4457 round_trippers.go:580]     Audit-Id: 164daf9b-5def-4288-b3e8-3352b2b53d96
	I0728 18:40:26.271010    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.271013    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.271017    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.271021    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.271250    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0728 18:40:26.271627    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.271637    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.271647    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.271652    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.273317    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.273327    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.273333    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.273337    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:26.273339    4457 round_trippers.go:580]     Audit-Id: 20ddf9f1-b697-42ab-95d3-a57551a5f208
	I0728 18:40:26.273342    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.273344    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.273347    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.273588    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.768053    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:26.768081    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.768178    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.768194    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.770949    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:26.770969    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.770981    4457 round_trippers.go:580]     Audit-Id: da51901c-9414-4d31-8654-2201f2fd0fb0
	I0728 18:40:26.770989    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.770995    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.771000    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.771005    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.771011    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.771233    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0728 18:40:26.771608    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.771619    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.771627    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.771632    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.773197    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.773207    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.773215    4457 round_trippers.go:580]     Audit-Id: f0b8501e-4f89-477d-9cef-d2242ada3831
	I0728 18:40:26.773219    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.773223    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.773227    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.773230    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.773234    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.773342    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.773509    4457 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.773518    4457 pod_ready.go:81] duration metric: took 1.506130085s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.773524    4457 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.773559    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:40:26.773563    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.773569    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.773573    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.774648    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.774677    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.774682    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.774686    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.774688    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.774710    4457 round_trippers.go:580]     Audit-Id: a93d1b3b-c0dc-49e5-9e03-05b639755669
	I0728 18:40:26.774718    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.774721    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.774932    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"337","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0728 18:40:26.775142    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.775149    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.775155    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.775159    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.776234    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.776240    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.776245    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.776248    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.776251    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.776254    4457 round_trippers.go:580]     Audit-Id: f465b160-4449-43a1-839f-6ac58d16f9f2
	I0728 18:40:26.776256    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.776259    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.776353    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.776506    4457 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.776513    4457 pod_ready.go:81] duration metric: took 2.983958ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.776522    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.776552    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:40:26.776556    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.776562    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.776564    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.777481    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.777489    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.777494    4457 round_trippers.go:580]     Audit-Id: 9e66dcbf-7231-4e5a-beff-b5411a839d42
	I0728 18:40:26.777501    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.777504    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.777507    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.777511    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.777521    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.777674    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"392","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0728 18:40:26.777905    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.777911    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.777917    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.777921    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.778916    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.778925    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.778933    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.778938    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.778944    4457 round_trippers.go:580]     Audit-Id: a29b0d70-1574-4be0-90fe-5dfab0fef028
	I0728 18:40:26.778948    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.778952    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.778955    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.779141    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.779305    4457 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.779313    4457 pod_ready.go:81] duration metric: took 2.783586ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.779319    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.779353    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:40:26.779358    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.779364    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.779368    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.780324    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.780331    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.780335    4457 round_trippers.go:580]     Audit-Id: 602ba5d4-072e-4f9e-8a98-2bfc5daa09d3
	I0728 18:40:26.780339    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.780343    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.780346    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.780349    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.780352    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.780544    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"391","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0728 18:40:26.780770    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.780778    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.780783    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.780787    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.781618    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.781626    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.781632    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.781637    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.781647    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.781651    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.781654    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.781657    4457 round_trippers.go:580]     Audit-Id: c29fa219-546a-4898-9545-0969dd593e05
	I0728 18:40:26.781801    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.781958    4457 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.781965    4457 pod_ready.go:81] duration metric: took 2.640467ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.781970    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.782003    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:40:26.782008    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.782014    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.782017    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.783057    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.783066    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.783072    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.783077    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.783080    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.783083    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.783086    4457 round_trippers.go:580]     Audit-Id: 3354da0f-5ec4-49af-a334-d357f510f9be
	I0728 18:40:26.783090    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.783266    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"381","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0728 18:40:26.861008    4457 request.go:629] Waited for 77.48576ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.861115    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.861124    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.861135    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.861154    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.863738    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:26.863749    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.863756    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.863760    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.863764    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.863768    4457 round_trippers.go:580]     Audit-Id: 1fde7d92-a96a-4b09-8228-6f9f1d406488
	I0728 18:40:26.863773    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.863776    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.863951    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:40:26.864199    4457 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.864210    4457 pod_ready.go:81] duration metric: took 82.236902ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.864219    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:27.059279    4457 request.go:629] Waited for 195.00583ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:40:27.059435    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:40:27.059447    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.059458    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.059476    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.062273    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:27.062288    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.062301    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.062305    4457 round_trippers.go:580]     Audit-Id: 5fc54c2f-5bde-4311-8ecb-a36886b3ae53
	I0728 18:40:27.062309    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.062312    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.062316    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.062329    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.062438    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"344","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0728 18:40:27.259622    4457 request.go:629] Waited for 196.881917ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:27.259819    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:27.259830    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.259840    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.259846    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.262566    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:27.262582    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.262589    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.262595    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.262599    4457 round_trippers.go:580]     Audit-Id: 5da50520-f933-4df1-8168-3b169328594d
	I0728 18:40:27.262603    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.262607    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.262611    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.262724    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:40:27.262971    4457 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:27.262982    4457 pod_ready.go:81] duration metric: took 398.764855ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:27.262995    4457 pod_ready.go:38] duration metric: took 2.001302979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:40:27.263017    4457 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:40:27.263087    4457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:40:27.276508    4457 command_runner.go:130] > 2038
	I0728 18:40:27.276759    4457 api_server.go:72] duration metric: took 16.992042297s to wait for apiserver process to appear ...
	I0728 18:40:27.276767    4457 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:40:27.276782    4457 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:40:27.280519    4457 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:40:27.280557    4457 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0728 18:40:27.280562    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.280568    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.280572    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.281045    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:27.281053    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.281058    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.281061    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.281063    4457 round_trippers.go:580]     Content-Length: 263
	I0728 18:40:27.281067    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.281070    4457 round_trippers.go:580]     Audit-Id: f10d8667-9a4e-495c-9813-468f64fff001
	I0728 18:40:27.281073    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.281075    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.281124    4457 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 18:40:27.281174    4457 api_server.go:141] control plane version: v1.30.3
	I0728 18:40:27.281185    4457 api_server.go:131] duration metric: took 4.413875ms to wait for apiserver health ...
	I0728 18:40:27.281192    4457 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 18:40:27.459120    4457 request.go:629] Waited for 177.816231ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.459188    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.459196    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.459207    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.459213    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.462655    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:40:27.462673    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.462680    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.462696    4457 round_trippers.go:580]     Audit-Id: 65569108-b6b0-48e7-a3b6-eaec6a9c3e0d
	I0728 18:40:27.462701    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.462705    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.462711    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.462713    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.463508    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0728 18:40:27.464758    4457 system_pods.go:59] 8 kube-system pods found
	I0728 18:40:27.464773    4457 system_pods.go:61] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:40:27.464777    4457 system_pods.go:61] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:40:27.464780    4457 system_pods.go:61] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:40:27.464785    4457 system_pods.go:61] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:40:27.464790    4457 system_pods.go:61] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:40:27.464793    4457 system_pods.go:61] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:40:27.464796    4457 system_pods.go:61] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:40:27.464799    4457 system_pods.go:61] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running
	I0728 18:40:27.464803    4457 system_pods.go:74] duration metric: took 183.610259ms to wait for pod list to return data ...
	I0728 18:40:27.464807    4457 default_sa.go:34] waiting for default service account to be created ...
	I0728 18:40:27.659087    4457 request.go:629] Waited for 194.191537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:40:27.659137    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:40:27.659145    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.659154    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.659161    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.661928    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:27.661943    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.661950    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.661954    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.661958    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.661971    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.661976    4457 round_trippers.go:580]     Content-Length: 261
	I0728 18:40:27.661979    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.661983    4457 round_trippers.go:580]     Audit-Id: 4aadf24d-c89f-41c9-8a53-a3e69516a618
	I0728 18:40:27.662004    4457 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"379c0dca-8465-4249-afbe-a226c72714a6","resourceVersion":"334","creationTimestamp":"2024-07-29T01:40:10Z"}}]}
	I0728 18:40:27.662149    4457 default_sa.go:45] found service account: "default"
	I0728 18:40:27.662162    4457 default_sa.go:55] duration metric: took 197.353594ms for default service account to be created ...
	I0728 18:40:27.662170    4457 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 18:40:27.859543    4457 request.go:629] Waited for 197.334207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.859667    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.859679    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.859690    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.859705    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.863099    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:40:27.863114    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.863124    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.863132    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.863140    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.863145    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.863151    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:28 GMT
	I0728 18:40:27.863156    4457 round_trippers.go:580]     Audit-Id: 89372aba-9228-4ce7-8c3e-9ba696ef14dc
	I0728 18:40:27.863738    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0728 18:40:27.864990    4457 system_pods.go:86] 8 kube-system pods found
	I0728 18:40:27.865001    4457 system_pods.go:89] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:40:27.865004    4457 system_pods.go:89] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:40:27.865008    4457 system_pods.go:89] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:40:27.865011    4457 system_pods.go:89] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:40:27.865014    4457 system_pods.go:89] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:40:27.865017    4457 system_pods.go:89] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:40:27.865020    4457 system_pods.go:89] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:40:27.865026    4457 system_pods.go:89] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running
	I0728 18:40:27.865031    4457 system_pods.go:126] duration metric: took 202.861517ms to wait for k8s-apps to be running ...
	I0728 18:40:27.865036    4457 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:40:27.865087    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:40:27.877189    4457 system_svc.go:56] duration metric: took 12.148245ms WaitForService to wait for kubelet
	I0728 18:40:27.877209    4457 kubeadm.go:582] duration metric: took 17.592503941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:40:27.877222    4457 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:40:28.060545    4457 request.go:629] Waited for 183.186568ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:40:28.060617    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:40:28.060627    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:28.060638    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:28.060649    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:28.063062    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:28.063077    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:28.063084    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:28.063088    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:28 GMT
	I0728 18:40:28.063092    4457 round_trippers.go:580]     Audit-Id: 23ca1e57-1ef9-463d-9917-6293510499e5
	I0728 18:40:28.063095    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:28.063099    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:28.063103    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:28.063178    4457 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0728 18:40:28.063479    4457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:40:28.063507    4457 node_conditions.go:123] node cpu capacity is 2
	I0728 18:40:28.063533    4457 node_conditions.go:105] duration metric: took 186.30824ms to run NodePressure ...
	I0728 18:40:28.063551    4457 start.go:241] waiting for startup goroutines ...
	I0728 18:40:28.063559    4457 start.go:246] waiting for cluster config update ...
	I0728 18:40:28.063575    4457 start.go:255] writing updated cluster config ...
	I0728 18:40:28.085419    4457 out.go:177] 
	I0728 18:40:28.108631    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:28.108721    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:40:28.131178    4457 out.go:177] * Starting "multinode-362000-m02" worker node in "multinode-362000" cluster
	I0728 18:40:28.173202    4457 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:40:28.173236    4457 cache.go:56] Caching tarball of preloaded images
	I0728 18:40:28.173439    4457 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:40:28.173458    4457 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:40:28.173553    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:40:28.174529    4457 start.go:360] acquireMachinesLock for multinode-362000-m02: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:40:28.174649    4457 start.go:364] duration metric: took 96.396µs to acquireMachinesLock for "multinode-362000-m02"
	I0728 18:40:28.174677    4457 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:40:28.174767    4457 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0728 18:40:28.196279    4457 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:40:28.196422    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:28.196454    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:28.206069    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52553
	I0728 18:40:28.206413    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:28.206778    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:28.206797    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:28.207019    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:28.207174    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:28.207296    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:28.207409    4457 start.go:159] libmachine.API.Create for "multinode-362000" (driver="hyperkit")
	I0728 18:40:28.207424    4457 client.go:168] LocalClient.Create starting
	I0728 18:40:28.207452    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 18:40:28.207509    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:40:28.207521    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:40:28.207570    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 18:40:28.207609    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:40:28.207621    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:40:28.207639    4457 main.go:141] libmachine: Running pre-create checks...
	I0728 18:40:28.207644    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .PreCreateCheck
	I0728 18:40:28.207727    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.207759    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:40:28.217427    4457 main.go:141] libmachine: Creating machine...
	I0728 18:40:28.217457    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .Create
	I0728 18:40:28.217681    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.217968    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.217664    4485 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:40:28.218070    4457 main.go:141] libmachine: (multinode-362000-m02) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 18:40:28.417113    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.417024    4485 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa...
	I0728 18:40:28.458969    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.458896    4485 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk...
	I0728 18:40:28.458979    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Writing magic tar header
	I0728 18:40:28.458991    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Writing SSH key tar header
	I0728 18:40:28.459389    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.459351    4485 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02 ...
	I0728 18:40:28.887087    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.887108    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid
	I0728 18:40:28.887119    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Using UUID 803737f6-60f1-4d1a-bdda-22c83e05ebd1
	I0728 18:40:28.912735    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Generated MAC 6:55:c7:17:95:12
	I0728 18:40:28.912762    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:40:28.912830    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:40:28.912879    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:40:28.912926    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "803737f6-60f1-4d1a-bdda-22c83e05ebd1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/j
enkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:40:28.912966    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 803737f6-60f1-4d1a-bdda-22c83e05ebd1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/mult
inode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:40:28.912996    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:40:28.915928    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Pid is 4486
	I0728 18:40:28.916380    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 0
	I0728 18:40:28.916404    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.916470    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:28.917361    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:28.917452    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:28.917486    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:28.917522    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:28.917550    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:28.917570    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:28.917584    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:28.917592    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:28.917600    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:28.917608    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:28.917626    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:28.917639    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:28.917668    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:28.917685    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:28.923387    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:40:28.931573    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:40:28.932540    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:40:28.932561    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:40:28.932582    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:40:28.932598    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:40:29.320577    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:40:29.320592    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:40:29.435884    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:40:29.435905    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:40:29.435916    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:40:29.435943    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:40:29.436773    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:40:29.436784    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:40:30.918501    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 1
	I0728 18:40:30.918517    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:30.918587    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:30.919342    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:30.919399    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:30.919420    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:30.919429    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:30.919438    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:30.919449    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:30.919490    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:30.919500    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:30.919512    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:30.919520    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:30.919527    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:30.919540    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:30.919547    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:30.919555    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:32.919568    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 2
	I0728 18:40:32.919590    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:32.919676    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:32.920412    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:32.920465    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:32.920480    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:32.920489    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:32.920497    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:32.920503    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:32.920509    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:32.920517    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:32.920525    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:32.920530    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:32.920537    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:32.920542    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:32.920555    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:32.920567    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:34.921463    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 3
	I0728 18:40:34.921477    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:34.921597    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:34.922396    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:34.922462    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:34.922476    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:34.922489    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:34.922497    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:34.922527    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:34.922538    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:34.922546    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:34.922554    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:34.922562    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:34.922571    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:34.922577    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:34.922590    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:34.922602    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:35.064243    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:40:35.064420    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:40:35.064430    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:40:35.086991    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:40:36.923506    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 4
	I0728 18:40:36.923520    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:36.923644    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:36.924397    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:36.924465    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:36.924489    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:36.924514    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:36.924526    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:36.924535    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:36.924544    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:36.924551    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:36.924560    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:36.924567    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:36.924574    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:36.924587    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:36.924595    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:36.924604    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:38.926293    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 5
	I0728 18:40:38.926311    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:38.926416    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:38.927187    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:38.927233    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:40:38.927266    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:40:38.927293    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found match: 6:55:c7:17:95:12
	I0728 18:40:38.927328    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | IP: 192.169.0.14
	I0728 18:40:38.927369    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:40:38.927999    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:38.928131    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:38.928238    4457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0728 18:40:38.928247    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:40:38.928325    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:38.928394    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:38.929157    4457 main.go:141] libmachine: Detecting operating system of created instance...
	I0728 18:40:38.929165    4457 main.go:141] libmachine: Waiting for SSH to be available...
	I0728 18:40:38.929169    4457 main.go:141] libmachine: Getting to WaitForSSH function...
	I0728 18:40:38.929174    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:38.929261    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:38.929352    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:38.929452    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:38.929561    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:38.929698    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:38.929908    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:38.929916    4457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0728 18:40:39.947133    4457 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0728 18:40:42.997563    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:40:42.997576    4457 main.go:141] libmachine: Detecting the provisioner...
	I0728 18:40:42.997582    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:42.997714    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:42.997827    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:42.997912    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:42.997996    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:42.998124    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:42.998272    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:42.998280    4457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0728 18:40:43.045978    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0728 18:40:43.046023    4457 main.go:141] libmachine: found compatible host: buildroot
	I0728 18:40:43.046030    4457 main.go:141] libmachine: Provisioning with buildroot...
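Provisioner detection above works by running `cat /etc/os-release` and inspecting the KEY=value pairs (here `ID=buildroot`). A short Go sketch of that parse, trimming the optional quotes seen on `PRETTY_NAME` (a sketch of the idea, not libmachine's exact detector):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOSRelease splits /etc/os-release style KEY=value lines into a
// map, trimming optional surrounding double quotes.
func parseOSRelease(text string) map[string]string {
	out := map[string]string{}
	for _, line := range strings.Split(text, "\n") {
		k, v, ok := strings.Cut(strings.TrimSpace(line), "=")
		if !ok || k == "" {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	// Sample taken from the SSH output logged above.
	sample := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], "/", info["PRETTY_NAME"])
}
```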
	I0728 18:40:43.046037    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:43.046170    4457 buildroot.go:166] provisioning hostname "multinode-362000-m02"
	I0728 18:40:43.046181    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:43.046287    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.046370    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.046448    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.046527    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.046623    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.046740    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.046874    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.046882    4457 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m02 && echo "multinode-362000-m02" | sudo tee /etc/hostname
	I0728 18:40:43.105059    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m02
	
	I0728 18:40:43.105080    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.105211    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.105320    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.105409    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.105504    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.105643    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.105802    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.105819    4457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:40:43.169701    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:40:43.169727    4457 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:40:43.169738    4457 buildroot.go:174] setting up certificates
	I0728 18:40:43.169744    4457 provision.go:84] configureAuth start
	I0728 18:40:43.169752    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:43.169898    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:43.170014    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.170106    4457 provision.go:143] copyHostCerts
	I0728 18:40:43.170135    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:40:43.170197    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:40:43.170203    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:40:43.170514    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:40:43.170722    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:40:43.170768    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:40:43.170773    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:40:43.170856    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:40:43.171009    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:40:43.171051    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:40:43.171056    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:40:43.171141    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:40:43.171299    4457 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-362000-m02]
	I0728 18:40:43.298073    4457 provision.go:177] copyRemoteCerts
	I0728 18:40:43.298125    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:40:43.298138    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.298279    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.298379    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.298491    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.298573    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:43.329778    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:40:43.329849    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:40:43.349799    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:40:43.349871    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:40:43.369649    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:40:43.369722    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:40:43.389798    4457 provision.go:87] duration metric: took 220.050649ms to configureAuth
	I0728 18:40:43.389813    4457 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:40:43.389957    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:43.389970    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:43.390115    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.390206    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.390303    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.390377    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.390451    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.390588    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.390713    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.390721    4457 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:40:43.439593    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:40:43.439605    4457 buildroot.go:70] root file system type: tmpfs
	I0728 18:40:43.439687    4457 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:40:43.439700    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.439834    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.439933    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.440017    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.440100    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.440224    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.440371    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.440415    4457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:40:43.501624    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:40:43.501641    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.501774    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.501873    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.501958    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.502046    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.502176    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.502316    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.502328    4457 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:40:45.035137    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:40:45.035154    4457 main.go:141] libmachine: Checking connection to Docker...
	I0728 18:40:45.035161    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetURL
	I0728 18:40:45.035314    4457 main.go:141] libmachine: Docker is up and running!
	I0728 18:40:45.035322    4457 main.go:141] libmachine: Reticulating splines...
	I0728 18:40:45.035327    4457 client.go:171] duration metric: took 16.828230217s to LocalClient.Create
	I0728 18:40:45.035339    4457 start.go:167] duration metric: took 16.828263949s to libmachine.API.Create "multinode-362000"
	I0728 18:40:45.035344    4457 start.go:293] postStartSetup for "multinode-362000-m02" (driver="hyperkit")
	I0728 18:40:45.035351    4457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:40:45.035361    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.035510    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:40:45.035522    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:45.035604    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.035702    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.035791    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.035884    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:45.066439    4457 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:40:45.069494    4457 command_runner.go:130] > NAME=Buildroot
	I0728 18:40:45.069503    4457 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:40:45.069509    4457 command_runner.go:130] > ID=buildroot
	I0728 18:40:45.069515    4457 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:40:45.069519    4457 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:40:45.069605    4457 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:40:45.069615    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:40:45.069711    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:40:45.069900    4457 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:40:45.069906    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:40:45.070111    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:40:45.077222    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:40:45.097606    4457 start.go:296] duration metric: took 62.254158ms for postStartSetup
	I0728 18:40:45.097632    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:40:45.098242    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:45.098370    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:40:45.098726    4457 start.go:128] duration metric: took 16.924283943s to createHost
	I0728 18:40:45.098741    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:45.098832    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.098919    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.099003    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.099077    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.099185    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:45.099306    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:45.099313    4457 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:40:45.147578    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217244.801768538
	
	I0728 18:40:45.147591    4457 fix.go:216] guest clock: 1722217244.801768538
	I0728 18:40:45.147596    4457 fix.go:229] Guest: 2024-07-28 18:40:44.801768538 -0700 PDT Remote: 2024-07-28 18:40:45.098735 -0700 PDT m=+82.457808845 (delta=-296.966462ms)
	I0728 18:40:45.147607    4457 fix.go:200] guest clock delta is within tolerance: -296.966462ms
	I0728 18:40:45.147611    4457 start.go:83] releasing machines lock for "multinode-362000-m02", held for 16.973286936s
	I0728 18:40:45.147628    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.147756    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:45.173962    4457 out.go:177] * Found network options:
	I0728 18:40:45.204336    4457 out.go:177]   - NO_PROXY=192.169.0.13
	W0728 18:40:45.229219    4457 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:40:45.229269    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.230244    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.230522    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.230627    4457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:40:45.230663    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	W0728 18:40:45.230752    4457 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:40:45.230852    4457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:40:45.230873    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:45.230901    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.231129    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.231167    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.231351    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.231375    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.231540    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:45.231579    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.231713    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:45.266285    4457 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:40:45.266505    4457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:40:45.266561    4457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:40:45.314306    4457 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:40:45.314769    4457 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:40:45.314791    4457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:40:45.314800    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:40:45.314867    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:40:45.330493    4457 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:40:45.330785    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:40:45.338853    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:40:45.347025    4457 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:40:45.347070    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:40:45.355439    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:40:45.363602    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:40:45.371577    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:40:45.380880    4457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:40:45.389435    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:40:45.397472    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:40:45.405641    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:40:45.413729    4457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:40:45.421006    4457 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:40:45.421160    4457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:40:45.429796    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:45.518123    4457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:40:45.537530    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:40:45.537594    4457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:40:45.549102    4457 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:40:45.549409    4457 command_runner.go:130] > [Unit]
	I0728 18:40:45.549419    4457 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:40:45.549424    4457 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:40:45.549429    4457 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:40:45.549434    4457 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:40:45.549438    4457 command_runner.go:130] > StartLimitBurst=3
	I0728 18:40:45.549442    4457 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:40:45.549445    4457 command_runner.go:130] > [Service]
	I0728 18:40:45.549449    4457 command_runner.go:130] > Type=notify
	I0728 18:40:45.549452    4457 command_runner.go:130] > Restart=on-failure
	I0728 18:40:45.549457    4457 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0728 18:40:45.549462    4457 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:40:45.549472    4457 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:40:45.549479    4457 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:40:45.549487    4457 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:40:45.549493    4457 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:40:45.549499    4457 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:40:45.549506    4457 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:40:45.549516    4457 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:40:45.549522    4457 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:40:45.549526    4457 command_runner.go:130] > ExecStart=
	I0728 18:40:45.549540    4457 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:40:45.549545    4457 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:40:45.549551    4457 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:40:45.549557    4457 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:40:45.549560    4457 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:40:45.549564    4457 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:40:45.549567    4457 command_runner.go:130] > LimitCORE=infinity
	I0728 18:40:45.549572    4457 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:40:45.549576    4457 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:40:45.549580    4457 command_runner.go:130] > TasksMax=infinity
	I0728 18:40:45.549585    4457 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:40:45.549590    4457 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:40:45.549594    4457 command_runner.go:130] > Delegate=yes
	I0728 18:40:45.549598    4457 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:40:45.549606    4457 command_runner.go:130] > KillMode=process
	I0728 18:40:45.549610    4457 command_runner.go:130] > [Install]
	I0728 18:40:45.549614    4457 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:40:45.549801    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:40:45.565724    4457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:40:45.583324    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:40:45.593605    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:40:45.603827    4457 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:40:45.641120    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:40:45.651501    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:40:45.666232    4457 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:40:45.666521    4457 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:40:45.669466    4457 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:40:45.669624    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:40:45.676791    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:40:45.691034    4457 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:40:45.784895    4457 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:40:45.882151    4457 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:40:45.882175    4457 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:40:45.896100    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:45.990118    4457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:40:48.297597    4457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.30750466s)
	I0728 18:40:48.297663    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:40:48.308016    4457 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:40:48.321063    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:40:48.331739    4457 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:40:48.422195    4457 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:40:48.531384    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:48.639310    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:40:48.653793    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:40:48.664199    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:48.761525    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:40:48.826095    4457 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:40:48.826173    4457 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:40:48.830343    4457 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:40:48.830369    4457 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:40:48.830376    4457 command_runner.go:130] > Device: 0,22	Inode: 818         Links: 1
	I0728 18:40:48.830384    4457 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:40:48.830389    4457 command_runner.go:130] > Access: 2024-07-29 01:40:48.429154603 +0000
	I0728 18:40:48.830394    4457 command_runner.go:130] > Modify: 2024-07-29 01:40:48.429154603 +0000
	I0728 18:40:48.830399    4457 command_runner.go:130] > Change: 2024-07-29 01:40:48.432154602 +0000
	I0728 18:40:48.830405    4457 command_runner.go:130] >  Birth: -
	I0728 18:40:48.830443    4457 start.go:563] Will wait 60s for crictl version
	I0728 18:40:48.830507    4457 ssh_runner.go:195] Run: which crictl
	I0728 18:40:48.833509    4457 command_runner.go:130] > /usr/bin/crictl
	I0728 18:40:48.833587    4457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:40:48.859242    4457 command_runner.go:130] > Version:  0.1.0
	I0728 18:40:48.859256    4457 command_runner.go:130] > RuntimeName:  docker
	I0728 18:40:48.859292    4457 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:40:48.859335    4457 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:40:48.860541    4457 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:40:48.860603    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:40:48.877040    4457 command_runner.go:130] > 27.1.0
	I0728 18:40:48.877909    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:40:48.899005    4457 command_runner.go:130] > 27.1.0
	I0728 18:40:48.923026    4457 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:40:48.944909    4457 out.go:177]   - env NO_PROXY=192.169.0.13
	I0728 18:40:48.970973    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:48.971189    4457 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:40:48.974398    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:40:48.983839    4457 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:40:48.983985    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:48.984226    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:48.984242    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:48.993127    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52577
	I0728 18:40:48.993494    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:48.993872    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:48.993890    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:48.994125    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:48.994249    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:48.994332    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:48.994420    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:48.995350    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:48.995612    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:48.995629    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:49.004619    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52579
	I0728 18:40:49.004963    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:49.005291    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:49.005303    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:49.005517    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:49.005628    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:49.005731    4457 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.14
	I0728 18:40:49.005737    4457 certs.go:194] generating shared ca certs ...
	I0728 18:40:49.005755    4457 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:40:49.005928    4457 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:40:49.006014    4457 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:40:49.006024    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:40:49.006050    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:40:49.006068    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:40:49.006086    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:40:49.006170    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:40:49.006221    4457 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:40:49.006231    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:40:49.006266    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:40:49.006297    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:40:49.006332    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:40:49.006404    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:40:49.006442    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.006467    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.006485    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.006509    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:40:49.026572    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:40:49.046453    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:40:49.065085    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:40:49.084898    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:40:49.105463    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:40:49.125140    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:40:49.145922    4457 ssh_runner.go:195] Run: openssl version
	I0728 18:40:49.150071    4457 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:40:49.150248    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:40:49.158617    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.161912    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.162020    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.162068    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.166203    4457 command_runner.go:130] > 3ec20f2e
	I0728 18:40:49.166303    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:40:49.174625    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:40:49.182878    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.186204    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.186308    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.186343    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.190766    4457 command_runner.go:130] > b5213941
	I0728 18:40:49.191001    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:40:49.200173    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:40:49.208601    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.211883    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.211977    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.212026    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.216284    4457 command_runner.go:130] > 51391683
	I0728 18:40:49.216335    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
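The three `openssl x509 -hash` / `ln -fs` pairs above are how minikube wires extra CAs into the guest's trust store: OpenSSL looks certificates up in `/etc/ssl/certs` by `<subject-hash>.<n>` symlinks. A minimal local replay of that mechanic, using a throwaway self-signed cert (generated here; not one of the certs from this log) and a temp directory instead of `/etc/ssl/certs`:

```shell
# Stand-in for minikubeCA.pem: a throwaway self-signed cert in a temp dir.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/demoCA.pem" -days 1 2>/dev/null
# Same hash minikube computes above (e.g. b5213941 for minikubeCA.pem):
hash=$(openssl x509 -hash -noout -in "$tmp/demoCA.pem")
# OpenSSL's cert-directory lookup expects a <subject-hash>.0 symlink:
ln -fs "$tmp/demoCA.pem" "$tmp/$hash.0"
readlink "$tmp/$hash.0"
```

The `|| ln -fs` guard in the logged commands (`test -L ... || ln -fs ...`) just makes the symlink creation idempotent across restarts.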
	I0728 18:40:49.224683    4457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:40:49.227840    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:40:49.227865    4457 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:40:49.227898    4457 kubeadm.go:934] updating node {m02 192.169.0.14 8443 v1.30.3 docker false true} ...
	I0728 18:40:49.227961    4457 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
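The unit text dumped inline above (the `kubeadm.go:946 kubelet [Unit] ...` block) is the kubelet systemd drop-in that gets scp'd from memory a few lines later as `10-kubeadm.conf` (319 bytes). Reassembled for readability, and written to a temp path here rather than the real `/etc/systemd/system/kubelet.service.d/`:

```shell
# Reconstruction of the drop-in from the log above; field values are the
# ones logged for node m02 (hostname-override, node-ip).
unit=$(mktemp -d)/10-kubeadm.conf
cat > "$unit" <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14

[Install]
EOF
grep -c '^ExecStart' "$unit"
```

The empty `ExecStart=` line is deliberate systemd syntax: it clears any `ExecStart` inherited from the base `kubelet.service` before the drop-in sets its own.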
	I0728 18:40:49.228003    4457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:40:49.235367    4457 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0728 18:40:49.235443    4457 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0728 18:40:49.235482    4457 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0728 18:40:49.243649    4457 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0728 18:40:49.243649    4457 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0728 18:40:49.243652    4457 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0728 18:40:49.243669    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0728 18:40:49.243672    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0728 18:40:49.243709    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:40:49.243759    4457 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0728 18:40:49.243759    4457 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0728 18:40:49.247026    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0728 18:40:49.247047    4457 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0728 18:40:49.247063    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0728 18:40:49.257978    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0728 18:40:49.276725    4457 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0728 18:40:49.276766    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0728 18:40:49.276772    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0728 18:40:49.276916    4457 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0728 18:40:49.298878    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0728 18:40:49.298902    4457 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0728 18:40:49.298938    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
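The `stat` failures followed by `scp` above are minikube's existence-check-then-transfer pattern for the k8s binaries: probe each target path, and copy from the local cache only when the probe fails. A local sketch of the same control flow, with `cp` standing in for the scp-over-ssh that `ssh_runner` performs and temp dirs standing in for the real cache and `/var/lib/minikube/binaries` paths:

```shell
cache=$(mktemp -d)   # stand-in for .minikube/cache/linux/amd64/v1.30.3
bindir=$(mktemp -d)  # stand-in for /var/lib/minikube/binaries/v1.30.3
for b in kubeadm kubectl kubelet; do
  printf 'fake %s\n' "$b" > "$cache/$b"
done
for b in kubeadm kubectl kubelet; do
  if ! stat "$bindir/$b" >/dev/null 2>&1; then  # the "No such file" branch above
    cp "$cache/$b" "$bindir/$b"                 # minikube: scp cache/... --> binaries/...
  fi
done
ls "$bindir"
```

On a re-run the `stat` succeeds and the copies are skipped, which is why the real check uses `stat -c "%s %y"` — size and mtime let minikube decide whether the cached copy is still current.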
	I0728 18:40:49.898411    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0728 18:40:49.906559    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0728 18:40:49.921120    4457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:40:49.934769    4457 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:40:49.937710    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
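The one-liner above is an idempotent /etc/hosts rewrite: strip any existing `control-plane.minikube.internal` entry, append the current one, and copy the result back over the original. Replayed here (bash, because of the `$'\t'` quoting) against a temp file instead of /etc/hosts, so no sudo is needed:

```shell
hosts=$(mktemp)
# Seed with a stale entry pointing at an old IP (hypothetical value):
printf '127.0.0.1\tlocalhost\n192.169.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Same grep-then-append trick as the logged command:
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.169.0.13\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it twice leaves exactly one entry, which is the point: the preceding `grep 192.169.0.13 ... /etc/hosts` check lets minikube skip the rewrite entirely when the entry is already correct.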
	I0728 18:40:49.947952    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:50.047907    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:40:50.064915    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:50.065204    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:50.065229    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:50.074077    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52581
	I0728 18:40:50.074432    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:50.074769    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:50.074781    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:50.074969    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:50.075085    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:50.075169    4457 start.go:317] joinCluster: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:40:50.075246    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0728 18:40:50.075261    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:40:50.075347    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:40:50.075454    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:40:50.075546    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:40:50.075649    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:40:50.156498    4457 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:40:50.159134    4457 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:40:50.159180    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02"
	I0728 18:40:50.190727    4457 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:40:50.278320    4457 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 18:40:50.278342    4457 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 18:40:50.308652    4457 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:40:50.308668    4457 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:40:50.308672    4457 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:40:50.412127    4457 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:40:50.913045    4457 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.388166ms
	I0728 18:40:50.913059    4457 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0728 18:40:51.428559    4457 command_runner.go:130] > This node has joined the cluster:
	I0728 18:40:51.428576    4457 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0728 18:40:51.428582    4457 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0728 18:40:51.428588    4457 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0728 18:40:51.429534    4457 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:40:51.429564    4457 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02": (1.270389395s)
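The join sequence above has two halves: run `kubeadm token create --print-join-command --ttl=0` over ssh on the control plane, then take its printed output and append minikube's own flags before executing it on the worker. A sketch of that assembly, using the (long-expired, throwaway-cluster) token and hash from this log:

```shell
# Output captured from the control plane above:
join="kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3"
# minikube prepends the pinned binaries PATH and appends its own flags:
full="sudo env PATH=/var/lib/minikube/binaries/v1.30.3:\$PATH $join --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02"
echo "$full"
```

`--cri-socket unix:///var/run/cri-dockerd.sock` is required because Docker-via-cri-dockerd is not a socket kubeadm auto-detects, and `--node-name` keeps the kubelet hostname override and the API object name in sync.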
	I0728 18:40:51.429591    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0728 18:40:51.536688    4457 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0728 18:40:51.642016    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-362000-m02 minikube.k8s.io/updated_at=2024_07_28T18_40_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=multinode-362000 minikube.k8s.io/primary=false
	I0728 18:40:51.708434    4457 command_runner.go:130] > node/multinode-362000-m02 labeled
	I0728 18:40:51.708583    4457 start.go:319] duration metric: took 1.63344562s to joinCluster
	I0728 18:40:51.708631    4457 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:40:51.708804    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:51.763047    4457 out.go:177] * Verifying Kubernetes components...
	I0728 18:40:51.784394    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:51.902394    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:40:51.914926    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:51.915161    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:51.915420    4457 node_ready.go:35] waiting up to 6m0s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:40:51.915463    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:51.915468    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:51.915477    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:51.915481    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:51.917354    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:51.917365    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:51.917372    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:51.917377    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:51.917381    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:51.917384    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:51.917387    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:51.917397    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:52 GMT
	I0728 18:40:51.917400    4457 round_trippers.go:580]     Audit-Id: d12b139f-80ae-4718-a058-1ec650ed124a
	I0728 18:40:51.917457    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:52.417647    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:52.417675    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:52.417687    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:52.417693    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:52.420314    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:52.420330    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:52.420338    4457 round_trippers.go:580]     Audit-Id: b383f664-c954-44fd-872c-d26e05a063a4
	I0728 18:40:52.420341    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:52.420346    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:52.420349    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:52.420354    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:52.420357    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:52.420360    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:52 GMT
	I0728 18:40:52.420429    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:52.916776    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:52.916808    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:52.916822    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:52.916831    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:52.919448    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:52.919463    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:52.919471    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:52.919475    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:52.919480    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:52.919484    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:52.919493    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:53 GMT
	I0728 18:40:52.919499    4457 round_trippers.go:580]     Audit-Id: 308d3f88-d2ac-4b1d-ae60-de6330a739ae
	I0728 18:40:52.919505    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:52.919591    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:53.416460    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:53.416488    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:53.416495    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:53.416499    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:53.418148    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:53.418157    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:53.418162    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:53 GMT
	I0728 18:40:53.418165    4457 round_trippers.go:580]     Audit-Id: 49d007cb-b6be-4970-b4c1-0ea39d37b196
	I0728 18:40:53.418169    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:53.418171    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:53.418174    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:53.418177    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:53.418179    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:53.418243    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:53.916759    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:53.916789    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:53.916841    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:53.916851    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:53.919633    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:53.919651    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:53.919659    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:53.919668    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:53.919673    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:54 GMT
	I0728 18:40:53.919679    4457 round_trippers.go:580]     Audit-Id: eecf0534-e934-4c67-8661-ae888824bd41
	I0728 18:40:53.919685    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:53.919699    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:53.919705    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:53.919777    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:53.919962    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:40:54.415741    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:54.415767    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:54.415778    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:54.415783    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:54.418337    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:54.418353    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:54.418362    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:54 GMT
	I0728 18:40:54.418379    4457 round_trippers.go:580]     Audit-Id: eaa3ad15-28e0-4470-9088-01c7917f0353
	I0728 18:40:54.418388    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:54.418393    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:54.418402    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:54.418407    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:54.418411    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:54.418478    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:54.915568    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:54.915594    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:54.915606    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:54.915610    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:54.917942    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:54.917957    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:54.917964    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:54.917968    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:54.917972    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:55 GMT
	I0728 18:40:54.917975    4457 round_trippers.go:580]     Audit-Id: 6660786f-3d59-43ee-b51b-7f2426f7d62f
	I0728 18:40:54.917979    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:54.917983    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:54.917986    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:54.918139    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:55.416396    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:55.416418    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:55.416425    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:55.416431    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:55.418358    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:55.418371    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:55.418378    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:55.418383    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:55 GMT
	I0728 18:40:55.418387    4457 round_trippers.go:580]     Audit-Id: 5ae9f345-b9bd-453f-b77f-c2370d002e68
	I0728 18:40:55.418391    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:55.418406    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:55.418411    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:55.418414    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:55.418466    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:55.915660    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:55.915675    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:55.915723    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:55.915727    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:55.917527    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:55.917538    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:55.917544    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:55.917548    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:56 GMT
	I0728 18:40:55.917551    4457 round_trippers.go:580]     Audit-Id: 082582db-68cc-4bac-add8-46564d4ab3d3
	I0728 18:40:55.917553    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:55.917556    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:55.917558    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:55.917561    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:55.917619    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:56.415461    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:56.415478    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:56.415485    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:56.415489    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:56.417302    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:56.417312    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:56.417318    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:56.417342    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:56.417349    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:56.417352    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:56.417355    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:56 GMT
	I0728 18:40:56.417358    4457 round_trippers.go:580]     Audit-Id: 63d664b3-59f4-4874-82f2-c8b6d40c8bee
	I0728 18:40:56.417360    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:56.417418    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:56.417566    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:40:56.916438    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:56.916455    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:56.916501    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:56.916506    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:56.917979    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:56.917989    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:56.917995    4457 round_trippers.go:580]     Audit-Id: ef6c2ccf-0d71-43c6-845f-6d98899f4eb5
	I0728 18:40:56.917999    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:56.918003    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:56.918011    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:56.918015    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:56.918024    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:56.918027    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:57 GMT
	I0728 18:40:56.918079    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:57.415969    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:57.415987    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:57.415995    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:57.415999    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:57.417727    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:57.417738    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:57.417744    4457 round_trippers.go:580]     Audit-Id: bca505a7-a0b6-4bc3-9b71-1541345897ab
	I0728 18:40:57.417752    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:57.417756    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:57.417758    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:57.417761    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:57.417764    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:57.417767    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:57 GMT
	I0728 18:40:57.417816    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:57.916250    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:57.916274    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:57.916281    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:57.916285    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:57.918051    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:57.918069    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:57.918079    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:58 GMT
	I0728 18:40:57.918085    4457 round_trippers.go:580]     Audit-Id: 1a4f1da9-1dab-4ac4-baf1-c1234fc0ff36
	I0728 18:40:57.918091    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:57.918101    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:57.918108    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:57.918114    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:57.918117    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:57.918234    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:58.415857    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:58.415905    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:58.415915    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:58.415920    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:58.417570    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:58.417586    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:58.417598    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:58 GMT
	I0728 18:40:58.417609    4457 round_trippers.go:580]     Audit-Id: 6236d266-a312-4cfb-be28-cd41c4b6a7d0
	I0728 18:40:58.417620    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:58.417627    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:58.417631    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:58.417635    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:58.417638    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:58.417691    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:58.417845    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:40:58.916828    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:58.916843    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:58.916850    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:58.916854    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:58.918226    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:58.918237    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:58.918247    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:58.918251    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:58.918256    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:58.918258    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:59 GMT
	I0728 18:40:58.918261    4457 round_trippers.go:580]     Audit-Id: 53860841-ae52-4770-b024-2915b5fd1f6f
	I0728 18:40:58.918263    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:58.918266    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:58.918363    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:59.415563    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:59.415594    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:59.415629    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:59.415638    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:59.418139    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:59.418157    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:59.418165    4457 round_trippers.go:580]     Audit-Id: f6fd96da-6d0a-48a8-92fa-0cfce8390021
	I0728 18:40:59.418169    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:59.418175    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:59.418179    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:59.418182    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:59.418186    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:59.418199    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:59 GMT
	I0728 18:40:59.418260    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:59.916342    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:59.916358    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:59.916365    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:59.916368    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:59.917948    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:59.917958    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:59.917964    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:59.917968    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:59.917970    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:59.917972    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:00 GMT
	I0728 18:40:59.917975    4457 round_trippers.go:580]     Audit-Id: 8e49f51d-a92f-488b-a20f-2634a5a0cb1f
	I0728 18:40:59.917978    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:59.917990    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:59.918029    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:41:00.416214    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:00.416243    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:00.416253    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:00.416258    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:00.417937    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:00.417961    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:00.417968    4457 round_trippers.go:580]     Audit-Id: 523b0305-eeac-4ae7-81de-a80936ca2113
	I0728 18:41:00.417974    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:00.417980    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:00.417991    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:00.418012    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:00.418019    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:41:00.418022    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:00 GMT
	I0728 18:41:00.418079    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:41:00.418265    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:00.916225    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:00.916255    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:00.916263    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:00.916271    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:00.917771    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:00.917783    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:00.917788    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:41:00.917792    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:01 GMT
	I0728 18:41:00.917795    4457 round_trippers.go:580]     Audit-Id: 09d72ceb-c858-49fe-9bb1-3f38f9aa7cf7
	I0728 18:41:00.917798    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:00.917801    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:00.917803    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:00.917805    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:00.917855    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:41:01.415618    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:01.415635    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:01.415686    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:01.415690    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:01.417196    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:01.417211    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:01.417217    4457 round_trippers.go:580]     Audit-Id: a9841fa3-7ff8-4139-82f8-69daa0bba949
	I0728 18:41:01.417221    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:01.417223    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:01.417226    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:01.417230    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:01.417232    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:01 GMT
	I0728 18:41:01.417300    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:01.916477    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:01.916513    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:01.916525    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:01.916533    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:01.919024    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:01.919047    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:01.919054    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:01.919058    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:01.919091    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:01.919098    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:02 GMT
	I0728 18:41:01.919102    4457 round_trippers.go:580]     Audit-Id: cd9eb789-5513-4485-a4a2-c02a70f6ff9b
	I0728 18:41:01.919107    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:01.919319    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:02.416280    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:02.416308    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:02.416318    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:02.416325    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:02.419423    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:41:02.419439    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:02.419446    4457 round_trippers.go:580]     Audit-Id: 77d6588a-a0c9-49ad-9bea-f4995beaa0a4
	I0728 18:41:02.419452    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:02.419456    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:02.419459    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:02.419472    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:02.419483    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:02 GMT
	I0728 18:41:02.419572    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:02.419792    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:02.915682    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:02.915708    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:02.915720    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:02.915725    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:02.918412    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:02.918431    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:02.918438    4457 round_trippers.go:580]     Audit-Id: cae8137c-c9ed-4122-9d64-6b2d98249f2f
	I0728 18:41:02.918442    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:02.918446    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:02.918451    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:02.918454    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:02.918458    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:03 GMT
	I0728 18:41:02.918549    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:03.416917    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:03.416947    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:03.417001    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:03.417014    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:03.419558    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:03.419572    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:03.419579    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:03.419584    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:03 GMT
	I0728 18:41:03.419587    4457 round_trippers.go:580]     Audit-Id: fcee71fb-b532-4412-b39b-c31e1c6abbeb
	I0728 18:41:03.419590    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:03.419595    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:03.419600    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:03.419684    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:03.916767    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:03.916795    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:03.916806    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:03.916813    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:03.919492    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:03.919511    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:03.919519    4457 round_trippers.go:580]     Audit-Id: 6202dce9-fba1-4c3b-8a72-4afb52afb468
	I0728 18:41:03.919523    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:03.919554    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:03.919564    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:03.919571    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:03.919575    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:04 GMT
	I0728 18:41:03.919682    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:04.415878    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:04.415906    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:04.415917    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:04.415925    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:04.418626    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:04.418647    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:04.418655    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:04.418661    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:04.418665    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:04.418668    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:04.418674    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:04 GMT
	I0728 18:41:04.418677    4457 round_trippers.go:580]     Audit-Id: cd1cfba9-fce7-49de-9e8f-26cb52d34aa6
	I0728 18:41:04.418747    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:04.916227    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:04.916249    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:04.916257    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:04.916263    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:04.918262    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:04.918276    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:04.918284    4457 round_trippers.go:580]     Audit-Id: 651f1d5c-8b97-4bb6-8469-b801c0e4c7da
	I0728 18:41:04.918290    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:04.918294    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:04.918300    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:04.918303    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:04.918310    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:05 GMT
	I0728 18:41:04.918394    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:04.918613    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:05.415597    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:05.415623    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:05.415635    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:05.415642    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:05.418163    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:05.418178    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:05.418196    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:05.418201    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:05.418240    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:05 GMT
	I0728 18:41:05.418249    4457 round_trippers.go:580]     Audit-Id: 805a3ed5-4c00-42f1-bcdd-41bc48301f81
	I0728 18:41:05.418252    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:05.418256    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:05.418442    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:05.916769    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:05.916827    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:05.916846    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:05.916856    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:05.919019    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:05.919032    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:05.919039    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:05.919043    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:05.919048    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:06 GMT
	I0728 18:41:05.919052    4457 round_trippers.go:580]     Audit-Id: 52130e5d-ffde-45a4-a971-eab6d44a0ed6
	I0728 18:41:05.919056    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:05.919078    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:05.919315    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:06.416652    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:06.416678    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:06.416690    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:06.416698    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:06.419246    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:06.419261    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:06.419268    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:06 GMT
	I0728 18:41:06.419273    4457 round_trippers.go:580]     Audit-Id: 00266202-acdd-48c9-815e-6fb223f16957
	I0728 18:41:06.419276    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:06.419280    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:06.419312    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:06.419333    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:06.419551    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:06.916999    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:06.917028    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:06.917039    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:06.917047    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:06.919459    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:06.919474    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:06.919481    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:07 GMT
	I0728 18:41:06.919486    4457 round_trippers.go:580]     Audit-Id: 20a92b14-dd96-4db8-940d-85d9c0a2a810
	I0728 18:41:06.919491    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:06.919494    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:06.919498    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:06.919501    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:06.919713    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:06.919938    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:07.416877    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:07.416904    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:07.416915    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:07.416929    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:07.419685    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:07.419699    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:07.419706    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:07.419732    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:07.419740    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:07.419744    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:07 GMT
	I0728 18:41:07.419748    4457 round_trippers.go:580]     Audit-Id: 1966e958-252f-4ca1-874e-729a3892c519
	I0728 18:41:07.419751    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:07.419885    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:07.915407    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:07.915434    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:07.915445    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:07.915452    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:07.918193    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:07.918207    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:07.918214    4457 round_trippers.go:580]     Audit-Id: 7b6f4c86-e2f2-499e-a6e9-5005f194f3af
	I0728 18:41:07.918227    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:07.918233    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:07.918238    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:07.918246    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:07.918249    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:08 GMT
	I0728 18:41:07.918526    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:08.417366    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:08.417392    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:08.417403    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:08.417409    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:08.419926    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:08.419943    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:08.419951    4457 round_trippers.go:580]     Audit-Id: ee3ce7eb-aaaa-4ee2-bb27-a193c0eeeed4
	I0728 18:41:08.419955    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:08.419960    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:08.419965    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:08.419969    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:08.419973    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:08 GMT
	I0728 18:41:08.420197    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:08.916465    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:08.916488    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:08.916500    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:08.916506    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:08.918896    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:08.918913    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:08.918920    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:08.918925    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:08.918929    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:08.918932    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:08.918936    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:09 GMT
	I0728 18:41:08.918941    4457 round_trippers.go:580]     Audit-Id: 58f30bdd-4b54-4b5d-a953-7f188f39b30b
	I0728 18:41:08.919050    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:09.416532    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:09.416554    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:09.416566    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:09.416574    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:09.418853    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:09.418868    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:09.418875    4457 round_trippers.go:580]     Audit-Id: c3eac4a3-9b45-45db-a1fb-25fda462c318
	I0728 18:41:09.418880    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:09.418910    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:09.418915    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:09.418921    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:09.418925    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:09 GMT
	I0728 18:41:09.419024    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:09.419238    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:09.916622    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:09.916642    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:09.916712    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:09.916721    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:09.918446    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:09.918461    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:09.918468    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:09.918472    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:09.918494    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:10 GMT
	I0728 18:41:09.918500    4457 round_trippers.go:580]     Audit-Id: c7c8034a-1726-4389-9683-dda851a06a30
	I0728 18:41:09.918503    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:09.918508    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:09.918659    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:10.415621    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:10.415650    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:10.415661    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:10.415668    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:10.418352    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:10.418367    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:10.418374    4457 round_trippers.go:580]     Audit-Id: 8ba843fa-a401-43c0-b5bf-cfb675340351
	I0728 18:41:10.418378    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:10.418385    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:10.418389    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:10.418394    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:10.418397    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:10 GMT
	I0728 18:41:10.418497    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:10.916408    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:10.916433    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:10.916523    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:10.916533    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:10.918927    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:10.918939    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:10.918946    4457 round_trippers.go:580]     Audit-Id: 5758486e-fdad-4bd6-b190-2d6279a70cce
	I0728 18:41:10.918951    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:10.918955    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:10.918958    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:10.918972    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:10.918977    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:11 GMT
	I0728 18:41:10.919210    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:11.415374    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:11.415398    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:11.415410    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:11.415418    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:11.418115    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:11.418129    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:11.418136    4457 round_trippers.go:580]     Audit-Id: 4090ba38-eff8-421a-91e6-81cd59f054a1
	I0728 18:41:11.418141    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:11.418148    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:11.418153    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:11.418157    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:11.418161    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:11 GMT
	I0728 18:41:11.418419    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:11.915591    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:11.915615    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:11.915627    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:11.915633    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:11.918225    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:11.918246    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:11.918257    4457 round_trippers.go:580]     Audit-Id: 6ca26a0d-28ad-4b81-b9af-5f08435c100b
	I0728 18:41:11.918265    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:11.918273    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:11.918276    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:11.918281    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:11.918297    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:12 GMT
	I0728 18:41:11.918527    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:11.918742    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:12.415206    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:12.415218    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:12.415225    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:12.415229    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:12.416645    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:12.416654    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:12.416659    4457 round_trippers.go:580]     Audit-Id: c3eaaf8c-b3c5-46b6-a7da-cb7674fd9ce2
	I0728 18:41:12.416663    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:12.416666    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:12.416670    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:12.416673    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:12.416675    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:12 GMT
	I0728 18:41:12.416729    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:12.915496    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:12.915521    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:12.915533    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:12.915537    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:12.918435    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:12.918450    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:12.918457    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:12.918461    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:12.918466    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:12.918470    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:13 GMT
	I0728 18:41:12.918473    4457 round_trippers.go:580]     Audit-Id: 8dc254ff-923f-4776-a405-351862f2b98e
	I0728 18:41:12.918477    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:12.918591    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:13.415401    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:13.415440    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:13.415455    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:13.415535    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:13.418180    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:13.418194    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:13.418202    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:13.418206    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:13 GMT
	I0728 18:41:13.418209    4457 round_trippers.go:580]     Audit-Id: f3f9fc94-e5f5-4700-92c4-a321937068ce
	I0728 18:41:13.418213    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:13.418215    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:13.418219    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:13.418288    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:13.915929    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:13.915952    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:13.915965    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:13.915971    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:13.918663    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:13.918683    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:13.918724    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:13.918731    4457 round_trippers.go:580]     Audit-Id: 43340662-41c2-429d-8f3f-e2324c253607
	I0728 18:41:13.918734    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:13.918738    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:13.918742    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:13.918746    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:13.918826    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:13.919039    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:14.415323    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:14.415347    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.415359    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.415365    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.418009    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:14.418026    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.418033    4457 round_trippers.go:580]     Audit-Id: 6d7c9f3f-d214-4077-bcc5-358f585101d4
	I0728 18:41:14.418040    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.418044    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.418049    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.418053    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.418057    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.418157    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"512","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3142 chars]
	I0728 18:41:14.418363    4457 node_ready.go:49] node "multinode-362000-m02" has status "Ready":"True"
	I0728 18:41:14.418380    4457 node_ready.go:38] duration metric: took 22.503390694s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:41:14.418388    4457 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:41:14.418432    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:41:14.418438    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.418445    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.418451    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.423078    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:41:14.423088    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.423093    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.423095    4457 round_trippers.go:580]     Audit-Id: cf39922b-fddb-47d2-9d4f-545639471fc8
	I0728 18:41:14.423098    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.423100    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.423103    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.423105    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.423726    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"512"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70370 chars]
	I0728 18:41:14.425296    4457 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.425340    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:41:14.425345    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.425351    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.425353    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.426799    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.426810    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.426817    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.426820    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.426823    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.426825    4457 round_trippers.go:580]     Audit-Id: c27b7ab2-38db-48f7-ba5b-f2602d93b372
	I0728 18:41:14.426828    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.426831    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.426939    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0728 18:41:14.427189    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.427195    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.427201    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.427204    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.428431    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.428439    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.428444    4457 round_trippers.go:580]     Audit-Id: c72c4ca1-1fb0-456a-9f22-745a31a724ba
	I0728 18:41:14.428446    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.428449    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.428452    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.428454    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.428458    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.428521    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.428683    4457 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.428692    4457 pod_ready.go:81] duration metric: took 3.385887ms for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.428698    4457 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.428731    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:41:14.428737    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.428742    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.428745    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.429730    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.429740    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.429745    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.429749    4457 round_trippers.go:580]     Audit-Id: 51bde7d2-1fa7-405e-8e23-295a5099bc1f
	I0728 18:41:14.429752    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.429755    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.429758    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.429761    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.429824    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"337","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0728 18:41:14.430059    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.430066    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.430071    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.430074    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.431039    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.431048    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.431056    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.431060    4457 round_trippers.go:580]     Audit-Id: 252cc3c9-6dd1-4613-b8ec-19f32b1bf0bb
	I0728 18:41:14.431080    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.431086    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.431089    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.431091    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.431297    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.431456    4457 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.431467    4457 pod_ready.go:81] duration metric: took 2.761563ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.431477    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.431509    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:41:14.431514    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.431520    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.431523    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.432486    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.432494    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.432500    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.432506    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.432512    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.432518    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.432524    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.432529    4457 round_trippers.go:580]     Audit-Id: d3984880-f623-4a7f-8c2d-3d8575b6c911
	I0728 18:41:14.432690    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"392","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0728 18:41:14.432918    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.432925    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.432931    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.432935    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.433769    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.433776    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.433781    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.433785    4457 round_trippers.go:580]     Audit-Id: 0408274a-afe6-4996-803b-0511bea524d5
	I0728 18:41:14.433789    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.433794    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.433798    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.433802    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.433912    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.434071    4457 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.434078    4457 pod_ready.go:81] duration metric: took 2.597455ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.434084    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.434113    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:41:14.434118    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.434123    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.434127    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.435161    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.435169    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.435175    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.435180    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.435184    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.435188    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.435192    4457 round_trippers.go:580]     Audit-Id: a87fe60d-e026-4a21-a54d-c3d5e4ecb353
	I0728 18:41:14.435200    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.435402    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"391","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0728 18:41:14.435626    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.435633    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.435639    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.435643    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.436663    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.436679    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.436685    4457 round_trippers.go:580]     Audit-Id: da88a2df-3fd0-46ac-9d49-67d9d6cb79de
	I0728 18:41:14.436689    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.436692    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.436695    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.436698    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.436700    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.436884    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.437038    4457 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.437048    4457 pod_ready.go:81] duration metric: took 2.956971ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.437055    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.615374    4457 request.go:629] Waited for 178.266998ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:41:14.615525    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:41:14.615539    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.615551    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.615558    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.617847    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:14.617864    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.617874    4457 round_trippers.go:580]     Audit-Id: d4c0f16b-8a44-402d-92da-e2de9a02f0e1
	I0728 18:41:14.617881    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.617887    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.617891    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.617894    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.617913    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.618029    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"488","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:41:14.816222    4457 request.go:629] Waited for 197.76697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:14.816296    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:14.816310    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.816332    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.816344    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.818980    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:14.818996    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.819004    4457 round_trippers.go:580]     Audit-Id: 1fab62a9-f3a2-4524-bba6-7168d3c40b1c
	I0728 18:41:14.819008    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.819012    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.819016    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.819019    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.819025    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:14.819107    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"512","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3142 chars]
	I0728 18:41:14.819314    4457 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.819325    4457 pod_ready.go:81] duration metric: took 382.271949ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.819337    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.017330    4457 request.go:629] Waited for 197.92175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:41:15.017492    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:41:15.017513    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.017524    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.017533    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.020083    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.020101    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.020109    4457 round_trippers.go:580]     Audit-Id: 52d5102d-ccfc-4431-9d8c-293a1bf9e524
	I0728 18:41:15.020113    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.020119    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.020124    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.020128    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.020132    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.020236    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"381","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0728 18:41:15.217317    4457 request.go:629] Waited for 196.739764ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.217503    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.217518    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.217537    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.217546    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.220476    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.220489    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.220496    4457 round_trippers.go:580]     Audit-Id: 2ba4e00c-940a-4166-8a8d-113da5ef2a56
	I0728 18:41:15.220502    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.220506    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.220511    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.220515    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.220519    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.220961    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:15.221221    4457 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:15.221234    4457 pod_ready.go:81] duration metric: took 401.897714ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.221243    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.415671    4457 request.go:629] Waited for 194.352358ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:41:15.415801    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:41:15.415812    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.415822    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.415830    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.418370    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.418388    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.418396    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.418400    4457 round_trippers.go:580]     Audit-Id: a4b392a9-87eb-4d6b-a818-7b8efb7d5bba
	I0728 18:41:15.418404    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.418407    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.418410    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.418414    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.418553    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"344","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0728 18:41:15.616023    4457 request.go:629] Waited for 197.174221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.616091    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.616175    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.616189    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.616196    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.618751    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.618767    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.618774    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.618779    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.618782    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.618786    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.618797    4457 round_trippers.go:580]     Audit-Id: e98369b6-dca2-4e64-96fa-466b02509d28
	I0728 18:41:15.618802    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.618868    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:15.619106    4457 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:15.619118    4457 pod_ready.go:81] duration metric: took 397.877259ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.619127    4457 pod_ready.go:38] duration metric: took 1.200751144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:41:15.619147    4457 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:41:15.619211    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:41:15.631548    4457 system_svc.go:56] duration metric: took 12.399324ms WaitForService to wait for kubelet
	I0728 18:41:15.631564    4457 kubeadm.go:582] duration metric: took 23.923383994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:41:15.631580    4457 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:41:15.816731    4457 request.go:629] Waited for 185.100002ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:41:15.816907    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:41:15.816919    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.816931    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.816938    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.819688    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.819706    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.819716    4457 round_trippers.go:580]     Audit-Id: d738bc89-2916-4cc6-a013-48e7e7ac584d
	I0728 18:41:15.819722    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.819726    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.819731    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.819738    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.819742    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:16 GMT
	I0728 18:41:15.820149    4457 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9145 chars]
	I0728 18:41:15.820524    4457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:41:15.820536    4457 node_conditions.go:123] node cpu capacity is 2
	I0728 18:41:15.820544    4457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:41:15.820548    4457 node_conditions.go:123] node cpu capacity is 2
	I0728 18:41:15.820554    4457 node_conditions.go:105] duration metric: took 188.973464ms to run NodePressure ...
	I0728 18:41:15.820563    4457 start.go:241] waiting for startup goroutines ...
	I0728 18:41:15.820589    4457 start.go:255] writing updated cluster config ...
	I0728 18:41:15.821417    4457 ssh_runner.go:195] Run: rm -f paused
	I0728 18:41:15.861833    4457 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0728 18:41:15.937418    4457 out.go:177] * Done! kubectl is now configured to use "multinode-362000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.459296259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.459872035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.460083053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.460133525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.460257304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:40:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28cbce0c6ed98e9c955fd2ad47b80253eef5c1d27aa60477f2b7c450ebe28396/resolv.conf as [nameserver 192.169.0.1]"
	Jul 29 01:40:25 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:40:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/de282e66d4c0558a185d2943edde7cc6d15f7c8e33b53206d011dc03e8998611/resolv.conf as [nameserver 192.169.0.1]"
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.627023969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.627173932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.627311883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.628582602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.666192284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.666339609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.666396957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.667447445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.011444643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.011504420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.011513820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.012153566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:17 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:41:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e1e93dc724260e39b5f122928824d04094fd5f45fd8acdcd5a10bf238cc3411/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 01:41:18 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:41:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469182532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469226256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469238850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469344356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fe2daed37b2f7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   9e1e93dc72426       busybox-fc5497c4f-8hq8g
	4e01b33bc28ce       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   0                   de282e66d4c05       coredns-7db6d8ff4d-8npcw
	1255904b9cda9       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   28cbce0c6ed98       storage-provisioner
	a44317c7df722       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              2 minutes ago        Running             kindnet-cni               0                   a8dcd682eb598       kindnet-4mw5v
	473044afd6a20       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                0                   3050e483a8a8d       kube-proxy-tz5h5
	898c4f8b22692       76932a3b37d7e                                                                                         2 minutes ago        Running             kube-controller-manager   0                   c5e0cac22c053       kube-controller-manager-multinode-362000
	f4075b746de31       1f6d574d502f3                                                                                         2 minutes ago        Running             kube-apiserver            0                   1e7d4787a9c38       kube-apiserver-multinode-362000
	ef990ab76809a       3edc18e7b7672                                                                                         2 minutes ago        Running             kube-scheduler            0                   9bd37faa2f0ae       kube-scheduler-multinode-362000
	e54a6e4f589e1       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      0                   9ebd1495f3898       etcd-multinode-362000
	
	
	==> coredns [4e01b33bc28c] <==
	[INFO] 10.244.0.3:35329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053956s
	[INFO] 10.244.1.2:42551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000064935s
	[INFO] 10.244.1.2:37359 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000065134s
	[INFO] 10.244.1.2:58343 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075262s
	[INFO] 10.244.1.2:49050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090366s
	[INFO] 10.244.1.2:53653 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107571s
	[INFO] 10.244.1.2:56614 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107796s
	[INFO] 10.244.1.2:36768 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092239s
	[INFO] 10.244.1.2:47351 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105143s
	[INFO] 10.244.0.3:57350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085706s
	[INFO] 10.244.0.3:38330 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000035689s
	[INFO] 10.244.0.3:34046 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005355s
	[INFO] 10.244.0.3:37101 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083044s
	[INFO] 10.244.1.2:35916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149042s
	[INFO] 10.244.1.2:52331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100403s
	[INFO] 10.244.1.2:59376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110433s
	[INFO] 10.244.1.2:54731 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089837s
	[INFO] 10.244.0.3:55981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000054156s
	[INFO] 10.244.0.3:52651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064795s
	[INFO] 10.244.0.3:44319 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045378s
	[INFO] 10.244.0.3:47078 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00004451s
	[INFO] 10.244.1.2:41717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100439s
	[INFO] 10.244.1.2:48492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113445s
	[INFO] 10.244.1.2:34934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060259s
	[INFO] 10.244.1.2:39620 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000143004s
	
	
	==> describe nodes <==
	Name:               multinode-362000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_28T18_39_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:39:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:40:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-362000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b0deb4a701e49b1b84599ec1f9f7e3e
	  System UUID:                81224f45-0000-0000-b808-288a2b40595b
	  Boot ID:                    96400dcc-d649-4a6a-b0b3-add8d75e0274
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8hq8g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 coredns-7db6d8ff4d-8npcw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m29s
	  kube-system                 etcd-multinode-362000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m43s
	  kube-system                 kindnet-4mw5v                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m30s
	  kube-system                 kube-apiserver-multinode-362000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-multinode-362000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-proxy-tz5h5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-multinode-362000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m27s  kube-proxy       
	  Normal  Starting                 2m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m43s  kubelet          Node multinode-362000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s  kubelet          Node multinode-362000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s  kubelet          Node multinode-362000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m30s  node-controller  Node multinode-362000 event: Registered Node multinode-362000 in Controller
	  Normal  NodeReady                2m14s  kubelet          Node multinode-362000 status is now: NodeReady
	
	
	Name:               multinode-362000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_28T18_40_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:40:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:42:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:40:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:40:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:40:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:41:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-362000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff3e93e24af54aa0951fd8bce080e314
	  System UUID:                80374d1a-0000-0000-bdda-22c83e05ebd1
	  Boot ID:                    79f99fe7-d394-40c3-9dc4-0519f577ae97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-svnlx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kindnet-8hhwv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      108s
	  kube-system                 kube-proxy-dzz6p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x2 over 109s)  kubelet          Node multinode-362000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x2 over 109s)  kubelet          Node multinode-362000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x2 over 109s)  kubelet          Node multinode-362000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-362000-m02 event: Registered Node multinode-362000-m02 in Controller
	  Normal  NodeReady                86s                  kubelet          Node multinode-362000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.320796] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.324011] systemd-fstab-generator[497]: Ignoring "noauto" option for root device
	[  +0.106079] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +1.734505] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.254464] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.101763] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.126307] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +2.147515] kauditd_printk_skb: 161 callbacks suppressed
	[  +0.258303] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.101422] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.099797] systemd-fstab-generator[1148]: Ignoring "noauto" option for root device
	[  +0.130725] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +3.696971] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +2.228757] kauditd_printk_skb: 136 callbacks suppressed
	[  +0.343454] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.314486] systemd-fstab-generator[1701]: Ignoring "noauto" option for root device
	[  +0.386446] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.152718] systemd-fstab-generator[2104]: Ignoring "noauto" option for root device
	[  +0.081840] kauditd_printk_skb: 40 callbacks suppressed
	[Jul29 01:40] systemd-fstab-generator[2300]: Ignoring "noauto" option for root device
	[  +0.096619] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.761772] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 01:41] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e54a6e4f589e] <==
	{"level":"info","ts":"2024-07-29T01:39:52.172316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-07-29T01:39:52.172398Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-07-29T01:39:52.172634Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T01:39:52.172915Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:39:52.173008Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:39:52.175373Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:39:52.175411Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:39:52.605978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T01:39:52.606026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T01:39:52.60606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-07-29T01:39:52.606096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.611542Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-362000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:39:52.6118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:39:52.616009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:39:52.618374Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.622344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:39:52.622402Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:39:52.623812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:39:52.624929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-07-29T01:39:52.624972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.627332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.62747Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:42:39 up 3 min,  0 users,  load average: 0.28, 0.22, 0.09
	Linux multinode-362000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a44317c7df72] <==
	I0729 01:41:34.889201       1 main.go:299] handling current node
	I0729 01:41:44.894327       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:41:44.894427       1 main.go:299] handling current node
	I0729 01:41:44.894478       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:41:44.894499       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:41:54.890539       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:41:54.890564       1 main.go:299] handling current node
	I0729 01:41:54.890578       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:41:54.890583       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:04.885531       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:04.885603       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:04.885917       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:04.886044       1 main.go:299] handling current node
	I0729 01:42:14.885642       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:14.885721       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:14.886206       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:14.886227       1 main.go:299] handling current node
	I0729 01:42:24.887758       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:24.887826       1 main.go:299] handling current node
	I0729 01:42:24.887845       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:24.887854       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:34.895434       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:34.895488       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:34.895786       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:34.895828       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f4075b746de3] <==
	I0729 01:39:55.027867       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 01:39:55.030632       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 01:39:55.031107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:39:55.330517       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:39:55.358329       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:39:55.475281       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 01:39:55.479845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0729 01:39:55.480523       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:39:55.483264       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 01:39:56.059443       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:39:56.382419       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:39:56.389290       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 01:39:56.394905       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:40:09.714656       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 01:40:10.014240       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 01:41:19.760754       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52605: use of closed network connection
	E0729 01:41:19.949242       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52607: use of closed network connection
	E0729 01:41:20.133177       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52609: use of closed network connection
	E0729 01:41:20.310477       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52611: use of closed network connection
	E0729 01:41:20.496662       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52613: use of closed network connection
	E0729 01:41:20.688244       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52615: use of closed network connection
	E0729 01:41:21.004331       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52618: use of closed network connection
	E0729 01:41:21.187855       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52620: use of closed network connection
	E0729 01:41:21.377063       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52622: use of closed network connection
	E0729 01:41:21.554865       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	
	
	==> kube-controller-manager [898c4f8b2269] <==
	I0729 01:40:09.978740       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 01:40:10.434675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="418.130162ms"
	I0729 01:40:10.443618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.69659ms"
	I0729 01:40:10.443770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.711µs"
	I0729 01:40:11.018935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.436686ms"
	I0729 01:40:11.027101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.124535ms"
	I0729 01:40:11.027181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.955µs"
	I0729 01:40:25.080337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="144.077µs"
	I0729 01:40:25.091162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.818µs"
	I0729 01:40:26.585034       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.036µs"
	I0729 01:40:26.604104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.022661ms"
	I0729 01:40:26.604164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.335µs"
	I0729 01:40:29.266767       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 01:40:51.188661       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-362000-m02\" does not exist"
	I0729 01:40:51.198306       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-362000-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:40:54.270525       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-362000-m02"
	I0729 01:41:14.160112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	I0729 01:41:16.670352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.140966ms"
	I0729 01:41:16.689017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.156378ms"
	I0729 01:41:16.689239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.248µs"
	I0729 01:41:16.690375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.154µs"
	I0729 01:41:18.880601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.490626ms"
	I0729 01:41:18.880810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.371µs"
	I0729 01:41:19.267756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.930765ms"
	I0729 01:41:19.267954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.527µs"
	
	
	==> kube-proxy [473044afd6a2] <==
	I0729 01:40:11.348502       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:40:11.365653       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0729 01:40:11.402559       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:40:11.402601       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:40:11.402613       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:40:11.404701       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:40:11.404918       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:40:11.404927       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:40:11.405549       1 config.go:192] "Starting service config controller"
	I0729 01:40:11.405561       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:40:11.405574       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:40:11.405577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:40:11.406068       1 config.go:319] "Starting node config controller"
	I0729 01:40:11.406074       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:40:11.505886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:40:11.506110       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:40:11.506263       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef990ab76809] <==
	W0729 01:39:54.313459       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 01:39:54.313555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:39:54.313606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:39:54.313700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:39:54.319482       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:39:54.319640       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:39:54.320028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:54.320142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 01:39:54.320265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:39:54.320317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:39:54.320410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:54.320468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:39:54.320533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:39:54.320584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:39:54.326412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 01:39:54.326519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 01:39:54.326657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 01:39:54.326710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 01:39:54.326731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:39:54.326795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:39:55.161836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:39:55.161876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:39:55.228811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:55.228993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 01:39:55.708397       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 01:40:16 multinode-362000 kubelet[2112]: I0729 01:40:16.266374    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4mw5v" podStartSLOduration=4.053341838 podStartE2EDuration="7.266360831s" podCreationTimestamp="2024-07-29 01:40:09 +0000 UTC" firstStartedPulling="2024-07-29 01:40:10.912699498 +0000 UTC m=+14.768334322" lastFinishedPulling="2024-07-29 01:40:14.125718491 +0000 UTC m=+17.981353315" observedRunningTime="2024-07-29 01:40:14.399483421 +0000 UTC m=+18.255118244" watchObservedRunningTime="2024-07-29 01:40:16.266360831 +0000 UTC m=+20.121995659"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.062085    2112 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.078175    2112 topology_manager.go:215] "Topology Admit Handler" podUID="a0fcbb6f-1182-4d9e-bc04-456f1b4de1db" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8npcw"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.079796    2112 topology_manager.go:215] "Topology Admit Handler" podUID="9032906f-5102-4224-b894-d541cf7d67e7" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197585    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj8xr\" (UniqueName: \"kubernetes.io/projected/a0fcbb6f-1182-4d9e-bc04-456f1b4de1db-kube-api-access-sj8xr\") pod \"coredns-7db6d8ff4d-8npcw\" (UID: \"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db\") " pod="kube-system/coredns-7db6d8ff4d-8npcw"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197676    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9032906f-5102-4224-b894-d541cf7d67e7-tmp\") pod \"storage-provisioner\" (UID: \"9032906f-5102-4224-b894-d541cf7d67e7\") " pod="kube-system/storage-provisioner"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197706    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0fcbb6f-1182-4d9e-bc04-456f1b4de1db-config-volume\") pod \"coredns-7db6d8ff4d-8npcw\" (UID: \"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db\") " pod="kube-system/coredns-7db6d8ff4d-8npcw"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197732    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqg6\" (UniqueName: \"kubernetes.io/projected/9032906f-5102-4224-b894-d541cf7d67e7-kube-api-access-gpqg6\") pod \"storage-provisioner\" (UID: \"9032906f-5102-4224-b894-d541cf7d67e7\") " pod="kube-system/storage-provisioner"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.558955    2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de282e66d4c0558a185d2943edde7cc6d15f7c8e33b53206d011dc03e8998611"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.563464    2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28cbce0c6ed98e9c955fd2ad47b80253eef5c1d27aa60477f2b7c450ebe28396"
	Jul 29 01:40:26 multinode-362000 kubelet[2112]: I0729 01:40:26.585155    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8npcw" podStartSLOduration=16.585141404 podStartE2EDuration="16.585141404s" podCreationTimestamp="2024-07-29 01:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:40:26.584921308 +0000 UTC m=+30.440556141" watchObservedRunningTime="2024-07-29 01:40:26.585141404 +0000 UTC m=+30.440776232"
	Jul 29 01:40:56 multinode-362000 kubelet[2112]: E0729 01:40:56.268334    2112 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:41:16 multinode-362000 kubelet[2112]: I0729 01:41:16.673625    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=65.673612713 podStartE2EDuration="1m5.673612713s" podCreationTimestamp="2024-07-29 01:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:40:26.610162124 +0000 UTC m=+30.465796959" watchObservedRunningTime="2024-07-29 01:41:16.673612713 +0000 UTC m=+80.529247541"
	Jul 29 01:41:16 multinode-362000 kubelet[2112]: I0729 01:41:16.674168    2112 topology_manager.go:215] "Topology Admit Handler" podUID="d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9" podNamespace="default" podName="busybox-fc5497c4f-8hq8g"
	Jul 29 01:41:16 multinode-362000 kubelet[2112]: I0729 01:41:16.765246    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8zl\" (UniqueName: \"kubernetes.io/projected/d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9-kube-api-access-qb8zl\") pod \"busybox-fc5497c4f-8hq8g\" (UID: \"d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9\") " pod="default/busybox-fc5497c4f-8hq8g"
	Jul 29 01:41:21 multinode-362000 kubelet[2112]: E0729 01:41:21.188294    2112 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40458->127.0.0.1:39093: write tcp 127.0.0.1:40458->127.0.0.1:39093: write: broken pipe
	Jul 29 01:41:56 multinode-362000 kubelet[2112]: E0729 01:41:56.264596    2112 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-362000 -n multinode-362000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-362000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (79.35s)

TestMultiNode/serial/CopyFile (3.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-362000 status --output json --alsologtostderr: exit status 2 (305.793111ms)

                                                
                                                
-- stdout --
	[{"Name":"multinode-362000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-362000-m02","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-362000-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:42:41.055725    4582 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:42:41.055896    4582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:42:41.055901    4582 out.go:304] Setting ErrFile to fd 2...
	I0728 18:42:41.055905    4582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:42:41.056070    4582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:42:41.056257    4582 out.go:298] Setting JSON to true
	I0728 18:42:41.056281    4582 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:42:41.056316    4582 notify.go:220] Checking for updates...
	I0728 18:42:41.056586    4582 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:42:41.056602    4582 status.go:255] checking status of multinode-362000 ...
	I0728 18:42:41.057025    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.057093    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.066142    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52688
	I0728 18:42:41.066480    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.066901    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.066918    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.067118    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.067231    4582 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:42:41.067314    4582 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:42:41.067380    4582 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:42:41.068316    4582 status.go:330] multinode-362000 host status = "Running" (err=<nil>)
	I0728 18:42:41.068335    4582 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:42:41.068567    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.068590    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.077045    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52690
	I0728 18:42:41.077380    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.077691    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.077699    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.077932    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.078039    4582 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:42:41.078120    4582 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:42:41.078365    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.078391    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.086987    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52692
	I0728 18:42:41.087319    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.087647    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.087663    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.087865    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.087965    4582 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:42:41.088119    4582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:42:41.088146    4582 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:42:41.088221    4582 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:42:41.088337    4582 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:42:41.088417    4582 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:42:41.088493    4582 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:42:41.116865    4582 ssh_runner.go:195] Run: systemctl --version
	I0728 18:42:41.121211    4582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:42:41.132549    4582 kubeconfig.go:125] found "multinode-362000" server: "https://192.169.0.13:8443"
	I0728 18:42:41.132575    4582 api_server.go:166] Checking apiserver status ...
	I0728 18:42:41.132612    4582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:42:41.143663    4582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup
	W0728 18:42:41.150775    4582 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:42:41.150818    4582 ssh_runner.go:195] Run: ls
	I0728 18:42:41.153947    4582 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:42:41.156949    4582 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:42:41.156960    4582 status.go:422] multinode-362000 apiserver status = Running (err=<nil>)
	I0728 18:42:41.156969    4582 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:42:41.156991    4582 status.go:255] checking status of multinode-362000-m02 ...
	I0728 18:42:41.157271    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.157292    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.166020    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52696
	I0728 18:42:41.166352    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.166702    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.166719    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.166930    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.167037    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:42:41.167118    4582 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:42:41.167222    4582 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:42:41.168212    4582 status.go:330] multinode-362000-m02 host status = "Running" (err=<nil>)
	I0728 18:42:41.168222    4582 host.go:66] Checking if "multinode-362000-m02" exists ...
	I0728 18:42:41.168486    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.168511    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.177140    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52698
	I0728 18:42:41.177467    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.177784    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.177800    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.178010    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.178122    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:42:41.178201    4582 host.go:66] Checking if "multinode-362000-m02" exists ...
	I0728 18:42:41.178466    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.178511    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.187055    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52700
	I0728 18:42:41.187384    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.187716    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.187733    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.187918    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.188027    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:42:41.188148    4582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:42:41.188158    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:42:41.188238    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:42:41.188312    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:42:41.188395    4582 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:42:41.188477    4582 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:42:41.216324    4582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:42:41.230806    4582 status.go:257] multinode-362000-m02 status: &{Name:multinode-362000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:42:41.230833    4582 status.go:255] checking status of multinode-362000-m03 ...
	I0728 18:42:41.231156    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.231180    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.239604    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52703
	I0728 18:42:41.239927    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.240248    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.240259    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.240475    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.240594    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .GetState
	I0728 18:42:41.240673    4582 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:42:41.240767    4582 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:42:41.241744    4582 status.go:330] multinode-362000-m03 host status = "Running" (err=<nil>)
	I0728 18:42:41.241754    4582 host.go:66] Checking if "multinode-362000-m03" exists ...
	I0728 18:42:41.242014    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.242038    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.250393    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52705
	I0728 18:42:41.250732    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.251082    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.251100    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.251318    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.251433    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:42:41.251514    4582 host.go:66] Checking if "multinode-362000-m03" exists ...
	I0728 18:42:41.251773    4582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:41.251801    4582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:41.260082    4582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52707
	I0728 18:42:41.260424    4582 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:41.260782    4582 main.go:141] libmachine: Using API Version  1
	I0728 18:42:41.260802    4582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:41.260994    4582 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:41.261108    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:42:41.261229    4582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:42:41.261239    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:42:41.261328    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:42:41.261400    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:42:41.261516    4582 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:42:41.261591    4582 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:42:41.295212    4582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:42:41.306522    4582 status.go:257] multinode-362000-m03 status: &{Name:multinode-362000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-362000 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 logs -n 25: (2.093483286s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p json-output-error-327000                       | json-output-error-327000 | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT | 28 Jul 24 18:35 PDT |
	| start   | -p first-332000                                   | first-332000             | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT | 28 Jul 24 18:36 PDT |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| start   | -p second-335000                                  | second-335000            | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| delete  | -p second-335000                                  | second-335000            | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| delete  | -p first-332000                                   | first-332000             | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| start   | -p mount-start-1-925000                           | mount-start-1-925000     | jenkins | v1.33.1 | 28 Jul 24 18:37 PDT |                     |
	|         | --memory=2048 --mount                             |                          |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                          |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                          |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                          |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| delete  | -p mount-start-2-934000                           | mount-start-2-934000     | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:39 PDT |
	| delete  | -p mount-start-1-925000                           | mount-start-1-925000     | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:39 PDT |
	| start   | -p multinode-362000                               | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:41 PDT |
	|         | --wait=true --memory=2200                         |                          |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                          |         |         |                     |                     |
	|         | --alsologtostderr                                 |                          |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- apply -f                   | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- rollout                    | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | status deployment/busybox                         |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx --                        |                          |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g -- nslookup               |                          |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx -- nslookup               |                          |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g                           |                          |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                          |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                          |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g -- sh                     |                          |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx                           |                          |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                          |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                          |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                          |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx -- sh                     |                          |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                          |         |         |                     |                     |
	| node    | add -p multinode-362000 -v 3                      | multinode-362000         | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT |                     |
	|         | --alsologtostderr                                 |                          |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
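The DNS-check entries in the table above run, inside a busybox pod, the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` to pull the resolved host IP out of the nslookup answer. A minimal sketch of that extraction, with canned busybox-style nslookup output standing in for the live pod (the function name `nslookup_output` and the sample addresses are illustrative, not taken from this report):

```shell
#!/bin/sh
# Canned output in the shape busybox nslookup prints; in the real test this
# comes from `kubectl exec` into the busybox pod. Addresses are illustrative.
nslookup_output() {
  cat <<'EOF'
Server:    10.96.0.10
Address 1: 10.96.0.10

Name:      host.minikube.internal
Address 1: 192.169.0.1
EOF
}

# Line 5 of the output carries the answer record; the third space-delimited
# field on that line is the resolved IP.
nslookup_output | awk 'NR==5' | cut -d' ' -f3
```

The same pattern appears twice in the table, once per busybox replica, followed by a `ping -c 1` against the extracted gateway address to confirm host reachability.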
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 18:39:22
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 18:39:22.678257    4457 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:39:22.678427    4457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:39:22.678433    4457 out.go:304] Setting ErrFile to fd 2...
	I0728 18:39:22.678437    4457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:39:22.678623    4457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:39:22.680060    4457 out.go:298] Setting JSON to false
	I0728 18:39:22.702282    4457 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4133,"bootTime":1722213029,"procs":426,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:39:22.702372    4457 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:39:22.725628    4457 out.go:177] * [multinode-362000] minikube v1.33.1 on Darwin 14.5
	I0728 18:39:22.766545    4457 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:39:22.766600    4457 notify.go:220] Checking for updates...
	I0728 18:39:22.809590    4457 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:39:22.830413    4457 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:39:22.851674    4457 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:39:22.872676    4457 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:39:22.893395    4457 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:39:22.914569    4457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:39:22.943475    4457 out.go:177] * Using the hyperkit driver based on user configuration
	I0728 18:39:22.985625    4457 start.go:297] selected driver: hyperkit
	I0728 18:39:22.985654    4457 start.go:901] validating driver "hyperkit" against <nil>
	I0728 18:39:22.985674    4457 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:39:22.990010    4457 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:39:22.990130    4457 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:39:22.998308    4457 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:39:23.002111    4457 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:39:23.002130    4457 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:39:23.002159    4457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:39:23.002374    4457 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:39:23.002401    4457 cni.go:84] Creating CNI manager for ""
	I0728 18:39:23.002410    4457 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0728 18:39:23.002415    4457 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0728 18:39:23.002489    4457 start.go:340] cluster config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:39:23.002577    4457 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:39:23.044533    4457 out.go:177] * Starting "multinode-362000" primary control-plane node in "multinode-362000" cluster
	I0728 18:39:23.065376    4457 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:39:23.065471    4457 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:39:23.065514    4457 cache.go:56] Caching tarball of preloaded images
	I0728 18:39:23.065727    4457 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:39:23.065745    4457 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:39:23.066249    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:39:23.066294    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json: {Name:mk76e134289e3e0202375db08bfa8f62ca33bf04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:23.066977    4457 start.go:360] acquireMachinesLock for multinode-362000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:39:23.067102    4457 start.go:364] duration metric: took 102.049µs to acquireMachinesLock for "multinode-362000"
	I0728 18:39:23.067147    4457 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:39:23.067240    4457 start.go:125] createHost starting for "" (driver="hyperkit")
	I0728 18:39:23.109453    4457 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:39:23.109706    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:39:23.109768    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:39:23.119748    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0728 18:39:23.120088    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:39:23.120492    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:39:23.120503    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:39:23.120705    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:39:23.120831    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:23.120933    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:23.121051    4457 start.go:159] libmachine.API.Create for "multinode-362000" (driver="hyperkit")
	I0728 18:39:23.121074    4457 client.go:168] LocalClient.Create starting
	I0728 18:39:23.121106    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 18:39:23.121154    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:39:23.121168    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:39:23.121227    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 18:39:23.121267    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:39:23.121279    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:39:23.121292    4457 main.go:141] libmachine: Running pre-create checks...
	I0728 18:39:23.121299    4457 main.go:141] libmachine: (multinode-362000) Calling .PreCreateCheck
	I0728 18:39:23.121386    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:23.121581    4457 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:39:23.122075    4457 main.go:141] libmachine: Creating machine...
	I0728 18:39:23.122083    4457 main.go:141] libmachine: (multinode-362000) Calling .Create
	I0728 18:39:23.122160    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:23.122282    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.122156    4465 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:39:23.122334    4457 main.go:141] libmachine: (multinode-362000) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 18:39:23.302999    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.302937    4465 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa...
	I0728 18:39:23.527292    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.527206    4465 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk...
	I0728 18:39:23.527305    4457 main.go:141] libmachine: (multinode-362000) DBG | Writing magic tar header
	I0728 18:39:23.527315    4457 main.go:141] libmachine: (multinode-362000) DBG | Writing SSH key tar header
	I0728 18:39:23.528110    4457 main.go:141] libmachine: (multinode-362000) DBG | I0728 18:39:23.527999    4465 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000 ...
	I0728 18:39:23.900948    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:23.900977    4457 main.go:141] libmachine: (multinode-362000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid
	I0728 18:39:23.901065    4457 main.go:141] libmachine: (multinode-362000) DBG | Using UUID 8122a2e4-0139-4f45-b808-288a2b40595b
	I0728 18:39:24.010965    4457 main.go:141] libmachine: (multinode-362000) DBG | Generated MAC e:8c:86:9:55:cf
	I0728 18:39:24.010982    4457 main.go:141] libmachine: (multinode-362000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:39:24.011023    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:39:24.011055    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:39:24.011094    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8122a2e4-0139-4f45-b808-288a2b40595b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:39:24.011203    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8122a2e4-0139-4f45-b808-288a2b40595b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:39:24.011235    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:39:24.014088    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 DEBUG: hyperkit: Pid is 4468
	I0728 18:39:24.014484    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 0
	I0728 18:39:24.014494    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:24.014570    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:24.015422    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:24.015502    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:24.015525    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:24.015549    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:24.015586    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:24.015599    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:24.015607    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:24.015614    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:24.015628    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:24.015637    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:24.015650    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:24.015660    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:24.015693    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:24.021389    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:39:24.069500    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:39:24.070254    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:39:24.070278    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:39:24.070299    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:39:24.070313    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:39:24.456576    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:39:24.456592    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:39:24.571109    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:39:24.571130    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:39:24.571151    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:39:24.571163    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:39:24.572087    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:39:24.572106    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:39:26.015894    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 1
	I0728 18:39:26.015907    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:26.015917    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:26.016731    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:26.016759    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:26.016773    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:26.016783    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:26.016791    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:26.016808    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:26.016824    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:26.016832    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:26.016839    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:26.016846    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:26.016859    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:26.016866    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:26.016874    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:28.018702    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 2
	I0728 18:39:28.018721    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:28.018814    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:28.019707    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:28.019765    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:28.019776    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:28.019785    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:28.019795    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:28.019811    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:28.019837    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:28.019848    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:28.019854    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:28.019861    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:28.019879    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:28.019907    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:28.019921    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:30.020766    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 3
	I0728 18:39:30.020783    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:30.020873    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:30.021728    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:30.021757    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:30.021767    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:30.021792    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:30.021805    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:30.021812    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:30.021821    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:30.021828    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:30.021835    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:30.021842    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:30.021846    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:30.021858    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:30.021870    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:30.184416    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:39:30.184441    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:39:30.184449    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:39:30.208412    4457 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:39:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:39:32.022767    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 4
	I0728 18:39:32.022787    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:32.022839    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:32.023669    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:32.023713    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0728 18:39:32.023724    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:39:32.023755    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:39:32.023766    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:39:32.023784    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:39:32.023792    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:39:32.023799    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:39:32.023807    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:39:32.023813    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:39:32.023819    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:39:32.023826    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:39:32.023833    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:39:34.026024    4457 main.go:141] libmachine: (multinode-362000) DBG | Attempt 5
	I0728 18:39:34.026055    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:34.026205    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:34.027793    4457 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:39:34.027829    4457 main.go:141] libmachine: (multinode-362000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:39:34.027848    4457 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:39:34.027863    4457 main.go:141] libmachine: (multinode-362000) DBG | Found match: e:8c:86:9:55:cf
	I0728 18:39:34.027872    4457 main.go:141] libmachine: (multinode-362000) DBG | IP: 192.169.0.13
	I0728 18:39:34.027956    4457 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:39:34.028709    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:34.028859    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:34.028998    4457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0728 18:39:34.029008    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:39:34.029129    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:39:34.029211    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:39:34.030175    4457 main.go:141] libmachine: Detecting operating system of created instance...
	I0728 18:39:34.030188    4457 main.go:141] libmachine: Waiting for SSH to be available...
	I0728 18:39:34.030194    4457 main.go:141] libmachine: Getting to WaitForSSH function...
	I0728 18:39:34.030199    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.030290    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.030399    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.030492    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.030582    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.030718    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.030906    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.030922    4457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0728 18:39:34.086759    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:39:34.086771    4457 main.go:141] libmachine: Detecting the provisioner...
	I0728 18:39:34.086784    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.086905    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.087015    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.087110    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.087189    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.087310    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.087445    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.087453    4457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0728 18:39:34.135876    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0728 18:39:34.135929    4457 main.go:141] libmachine: found compatible host: buildroot
	I0728 18:39:34.135936    4457 main.go:141] libmachine: Provisioning with buildroot...
	I0728 18:39:34.135942    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:34.136085    4457 buildroot.go:166] provisioning hostname "multinode-362000"
	I0728 18:39:34.136096    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:34.136235    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.136338    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.136429    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.136531    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.136616    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.136734    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.136915    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.136923    4457 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000 && echo "multinode-362000" | sudo tee /etc/hostname
	I0728 18:39:34.195664    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000
	
	I0728 18:39:34.195682    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.195810    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.195923    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.196019    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.196115    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.196261    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.196405    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.196416    4457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:39:34.251934    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:39:34.251961    4457 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:39:34.251977    4457 buildroot.go:174] setting up certificates
	I0728 18:39:34.251989    4457 provision.go:84] configureAuth start
	I0728 18:39:34.251997    4457 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:39:34.252120    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:34.252242    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.252325    4457 provision.go:143] copyHostCerts
	I0728 18:39:34.252364    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:39:34.252423    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:39:34.252432    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:39:34.252580    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:39:34.252812    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:39:34.252842    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:39:34.252846    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:39:34.252979    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:39:34.253126    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:39:34.253166    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:39:34.253171    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:39:34.253262    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:39:34.253416    4457 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-362000]
	I0728 18:39:34.351530    4457 provision.go:177] copyRemoteCerts
	I0728 18:39:34.351585    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:39:34.351602    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.351753    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.351854    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.351936    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.352010    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:34.383245    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:39:34.383314    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:39:34.402954    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:39:34.403011    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0728 18:39:34.421736    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:39:34.421795    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 18:39:34.441501    4457 provision.go:87] duration metric: took 189.502411ms to configureAuth
	I0728 18:39:34.441513    4457 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:39:34.441648    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:39:34.441661    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:34.441790    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.441876    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.441969    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.442056    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.442145    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.442274    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.442392    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.442404    4457 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:39:34.493819    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:39:34.493833    4457 buildroot.go:70] root file system type: tmpfs
	I0728 18:39:34.493900    4457 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:39:34.493913    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.494071    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.494176    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.494279    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.494372    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.494513    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.494655    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.494702    4457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:39:34.554254    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:39:34.554276    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:34.554416    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:34.554498    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.554612    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:34.554707    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:34.554839    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:34.554983    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:34.554996    4457 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:39:36.092020    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:39:36.092036    4457 main.go:141] libmachine: Checking connection to Docker...
	I0728 18:39:36.092043    4457 main.go:141] libmachine: (multinode-362000) Calling .GetURL
	I0728 18:39:36.092183    4457 main.go:141] libmachine: Docker is up and running!
	I0728 18:39:36.092191    4457 main.go:141] libmachine: Reticulating splines...
	I0728 18:39:36.092202    4457 client.go:171] duration metric: took 12.971373461s to LocalClient.Create
	I0728 18:39:36.092222    4457 start.go:167] duration metric: took 12.971429469s to libmachine.API.Create "multinode-362000"
	I0728 18:39:36.092231    4457 start.go:293] postStartSetup for "multinode-362000" (driver="hyperkit")
	I0728 18:39:36.092238    4457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:39:36.092255    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.092402    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:39:36.092414    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.092500    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.092597    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.092700    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.092804    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:36.128151    4457 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:39:36.138951    4457 command_runner.go:130] > NAME=Buildroot
	I0728 18:39:36.138964    4457 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:39:36.138977    4457 command_runner.go:130] > ID=buildroot
	I0728 18:39:36.138981    4457 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:39:36.138986    4457 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:39:36.139066    4457 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:39:36.139079    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:39:36.139192    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:39:36.139381    4457 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:39:36.139387    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:39:36.139596    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:39:36.150034    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:39:36.181496    4457 start.go:296] duration metric: took 89.257928ms for postStartSetup
	I0728 18:39:36.181528    4457 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:39:36.182156    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:36.182315    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:39:36.182682    4457 start.go:128] duration metric: took 13.115685704s to createHost
	I0728 18:39:36.182696    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.182783    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.182873    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.182964    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.183052    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.183169    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:39:36.183299    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:39:36.183310    4457 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:39:36.233941    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217176.431962180
	
	I0728 18:39:36.233954    4457 fix.go:216] guest clock: 1722217176.431962180
	I0728 18:39:36.233959    4457 fix.go:229] Guest: 2024-07-28 18:39:36.43196218 -0700 PDT Remote: 2024-07-28 18:39:36.18269 -0700 PDT m=+13.540401962 (delta=249.27218ms)
	I0728 18:39:36.233976    4457 fix.go:200] guest clock delta is within tolerance: 249.27218ms
	I0728 18:39:36.233981    4457 start.go:83] releasing machines lock for "multinode-362000", held for 13.167128835s
	I0728 18:39:36.233999    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234157    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:36.234246    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234536    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234638    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:39:36.234704    4457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:39:36.234729    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.234795    4457 ssh_runner.go:195] Run: cat /version.json
	I0728 18:39:36.234808    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:39:36.234813    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.234911    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:39:36.234922    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.235003    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:39:36.235023    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.235108    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:39:36.235124    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:36.235177    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:39:36.264413    4457 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 18:39:36.264685    4457 ssh_runner.go:195] Run: systemctl --version
	I0728 18:39:36.316672    4457 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:39:36.316742    4457 command_runner.go:130] > systemd 252 (252)
	I0728 18:39:36.316767    4457 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 18:39:36.316889    4457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:39:36.321953    4457 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:39:36.321971    4457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:39:36.322010    4457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:39:36.334160    4457 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:39:36.334256    4457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:39:36.334266    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:39:36.334357    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:39:36.348900    4457 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:39:36.349190    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:39:36.357441    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:39:36.365579    4457 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:39:36.365615    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:39:36.374041    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:39:36.382588    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:39:36.390648    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:39:36.398803    4457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:39:36.407245    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:39:36.415394    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:39:36.423686    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:39:36.431845    4457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:39:36.439196    4457 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:39:36.439273    4457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:39:36.446659    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:36.545774    4457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:39:36.564725    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:39:36.564801    4457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:39:36.579874    4457 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:39:36.579930    4457 command_runner.go:130] > [Unit]
	I0728 18:39:36.579938    4457 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:39:36.579957    4457 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:39:36.579965    4457 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:39:36.579969    4457 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:39:36.579973    4457 command_runner.go:130] > StartLimitBurst=3
	I0728 18:39:36.579977    4457 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:39:36.579980    4457 command_runner.go:130] > [Service]
	I0728 18:39:36.579984    4457 command_runner.go:130] > Type=notify
	I0728 18:39:36.579987    4457 command_runner.go:130] > Restart=on-failure
	I0728 18:39:36.579994    4457 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:39:36.580003    4457 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:39:36.580010    4457 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:39:36.580018    4457 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:39:36.580025    4457 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:39:36.580030    4457 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:39:36.580049    4457 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:39:36.580060    4457 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:39:36.580066    4457 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:39:36.580070    4457 command_runner.go:130] > ExecStart=
	I0728 18:39:36.580083    4457 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:39:36.580089    4457 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:39:36.580095    4457 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:39:36.580100    4457 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:39:36.580104    4457 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:39:36.580108    4457 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:39:36.580111    4457 command_runner.go:130] > LimitCORE=infinity
	I0728 18:39:36.580115    4457 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:39:36.580125    4457 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:39:36.580129    4457 command_runner.go:130] > TasksMax=infinity
	I0728 18:39:36.580132    4457 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:39:36.580138    4457 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:39:36.580141    4457 command_runner.go:130] > Delegate=yes
	I0728 18:39:36.580146    4457 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:39:36.580150    4457 command_runner.go:130] > KillMode=process
	I0728 18:39:36.580153    4457 command_runner.go:130] > [Install]
	I0728 18:39:36.580162    4457 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:39:36.580233    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:39:36.595157    4457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:39:36.607711    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:39:36.621293    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:39:36.636257    4457 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:39:36.654754    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:39:36.665107    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:39:36.679672    4457 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:39:36.679980    4457 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:39:36.682999    4457 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:39:36.683073    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:39:36.690292    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:39:36.703631    4457 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:39:36.798645    4457 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:39:36.913608    4457 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:39:36.913683    4457 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:39:36.928766    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:37.023923    4457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:39:39.303257    4457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.279358253s)
	I0728 18:39:39.303313    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:39:39.313662    4457 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:39:39.326612    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:39:39.337715    4457 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:39:39.430884    4457 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:39:39.532367    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:39.628365    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:39:39.643204    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:39:39.654329    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:39.763825    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:39:39.826299    4457 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:39:39.826376    4457 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:39:39.830952    4457 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:39:39.830966    4457 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:39:39.830971    4457 command_runner.go:130] > Device: 0,22	Inode: 799         Links: 1
	I0728 18:39:39.830976    4457 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:39:39.830980    4457 command_runner.go:130] > Access: 2024-07-29 01:39:39.975016608 +0000
	I0728 18:39:39.830985    4457 command_runner.go:130] > Modify: 2024-07-29 01:39:39.975016608 +0000
	I0728 18:39:39.830991    4457 command_runner.go:130] > Change: 2024-07-29 01:39:39.977016459 +0000
	I0728 18:39:39.830996    4457 command_runner.go:130] >  Birth: -
	I0728 18:39:39.831121    4457 start.go:563] Will wait 60s for crictl version
	I0728 18:39:39.831186    4457 ssh_runner.go:195] Run: which crictl
	I0728 18:39:39.833906    4457 command_runner.go:130] > /usr/bin/crictl
	I0728 18:39:39.834125    4457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:39:39.868813    4457 command_runner.go:130] > Version:  0.1.0
	I0728 18:39:39.868827    4457 command_runner.go:130] > RuntimeName:  docker
	I0728 18:39:39.868831    4457 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:39:39.868835    4457 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:39:39.870030    4457 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:39:39.870104    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:39:39.888179    4457 command_runner.go:130] > 27.1.0
	I0728 18:39:39.888967    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:39:39.904866    4457 command_runner.go:130] > 27.1.0
	I0728 18:39:39.954951    4457 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:39:39.954998    4457 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:39:39.955386    4457 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:39:39.960117    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:39:39.970920    4457 kubeadm.go:883] updating cluster {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0728 18:39:39.970989    4457 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:39:39.971053    4457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:39:39.982664    4457 docker.go:685] Got preloaded images: 
	I0728 18:39:39.982687    4457 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0728 18:39:39.982735    4457 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:39:39.990905    4457 command_runner.go:139] > {"Repositories":{}}
	I0728 18:39:39.991063    4457 ssh_runner.go:195] Run: which lz4
	I0728 18:39:39.993878    4457 command_runner.go:130] > /usr/bin/lz4
	I0728 18:39:39.994003    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0728 18:39:39.994127    4457 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0728 18:39:39.997154    4457 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:39:39.997231    4457 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:39:39.997247    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0728 18:39:40.955181    4457 docker.go:649] duration metric: took 961.12418ms to copy over tarball
	I0728 18:39:40.955248    4457 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0728 18:39:43.311904    4457 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.356684706s)
	I0728 18:39:43.311919    4457 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0728 18:39:43.338182    4457 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:39:43.345890    4457 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0728 18:39:43.345970    4457 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0728 18:39:43.359802    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:43.464657    4457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:39:45.815797    4457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.351164626s)
	I0728 18:39:45.815906    4457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:39:45.828514    4457 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0728 18:39:45.828528    4457 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0728 18:39:45.828533    4457 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0728 18:39:45.828545    4457 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0728 18:39:45.828549    4457 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0728 18:39:45.828553    4457 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:39:45.828557    4457 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0728 18:39:45.828561    4457 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:39:45.829169    4457 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:39:45.829186    4457 cache_images.go:84] Images are preloaded, skipping loading
	I0728 18:39:45.829208    4457 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0728 18:39:45.829285    4457 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:39:45.829361    4457 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:39:45.865868    4457 command_runner.go:130] > cgroupfs
	I0728 18:39:45.866530    4457 cni.go:84] Creating CNI manager for ""
	I0728 18:39:45.866540    4457 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0728 18:39:45.866550    4457 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:39:45.866567    4457 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-362000 NodeName:multinode-362000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:39:45.866659    4457 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-362000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:39:45.866716    4457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:39:45.874335    4457 command_runner.go:130] > kubeadm
	I0728 18:39:45.874343    4457 command_runner.go:130] > kubectl
	I0728 18:39:45.874346    4457 command_runner.go:130] > kubelet
	I0728 18:39:45.874413    4457 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:39:45.874458    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:39:45.881783    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0728 18:39:45.895323    4457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:39:45.909095    4457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0728 18:39:45.922717    4457 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:39:45.925684    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:39:45.935231    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:39:46.028862    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:39:46.043464    4457 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.13
	I0728 18:39:46.043477    4457 certs.go:194] generating shared ca certs ...
	I0728 18:39:46.043486    4457 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.043672    4457 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:39:46.043747    4457 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:39:46.043758    4457 certs.go:256] generating profile certs ...
	I0728 18:39:46.043800    4457 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key
	I0728 18:39:46.043812    4457 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt with IP's: []
	I0728 18:39:46.478407    4457 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt ...
	I0728 18:39:46.478427    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt: {Name:mka2aac26f6bb35ea3d4721520c4f39c62d89174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.478776    4457 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key ...
	I0728 18:39:46.478784    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key: {Name:mk7c0f81fa266c66b46f4b0af80e0b57928387bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.479030    4457 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57
	I0728 18:39:46.479046    4457 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0728 18:39:46.651341    4457 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57 ...
	I0728 18:39:46.651356    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57: {Name:mk093692e36abd7a7afccd1c946f90bc40aad12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.651665    4457 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57 ...
	I0728 18:39:46.651674    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57: {Name:mkc9cd932269a62b355966e5b683dd182c98ca39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.651895    4457 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt.cf2f2b57 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt
	I0728 18:39:46.652085    4457 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key
	I0728 18:39:46.652272    4457 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key
	I0728 18:39:46.652288    4457 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt with IP's: []
	I0728 18:39:46.815503    4457 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt ...
	I0728 18:39:46.815517    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt: {Name:mkf99da5cbf1447710168bfc4b4f7f7f9d4a5014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.815842    4457 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key ...
	I0728 18:39:46.815852    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key: {Name:mk4611170239081f2e211d7d80246aa607ebb9f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:39:46.816095    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:39:46.816126    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:39:46.816147    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:39:46.816169    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:39:46.816189    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 18:39:46.816210    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 18:39:46.816249    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 18:39:46.816268    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 18:39:46.816368    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:39:46.816423    4457 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:39:46.816432    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:39:46.816465    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:39:46.816496    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:39:46.816525    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:39:46.816592    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:39:46.816639    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:46.816663    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:39:46.816682    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:39:46.817131    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:39:46.847289    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:39:46.869482    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:39:46.891807    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:39:46.911672    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0728 18:39:46.931550    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:39:46.952093    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:39:46.972162    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:39:46.992027    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:39:47.011482    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:39:47.031028    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:39:47.051105    4457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 18:39:47.065442    4457 ssh_runner.go:195] Run: openssl version
	I0728 18:39:47.069800    4457 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:39:47.069946    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:39:47.078286    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.081634    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.081777    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.081812    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:39:47.085826    4457 command_runner.go:130] > b5213941
	I0728 18:39:47.086002    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:39:47.094361    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:39:47.103154    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.106672    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.106692    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.106727    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:39:47.110935    4457 command_runner.go:130] > 51391683
	I0728 18:39:47.111129    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:39:47.119568    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:39:47.128138    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.131687    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.131763    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.131796    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:39:47.136089    4457 command_runner.go:130] > 3ec20f2e
	I0728 18:39:47.136126    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
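The three `openssl x509 -hash` / `ln -fs` sequences above follow OpenSSL's trust-directory convention: a CA is found by a symlink named `<subject-hash>.0` pointing at the PEM file. A self-contained sketch of that convention, assuming `openssl` is installed and using a throwaway self-signed CA and a temp directory in place of the log's `minikubeCA.pem` and `/etc/ssl/certs`:

```shell
set -e
dir=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# OpenSSL locates CAs in a directory via symlinks named <subject-hash>.0
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -la "$dir/$hash.0"
```

The `test -L … || ln -fs …` form in the log just makes the link creation idempotent across restarts.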
	I0728 18:39:47.144736    4457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:39:47.148009    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:39:47.148026    4457 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:39:47.148069    4457 kubeadm.go:392] StartCluster: {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:39:47.148163    4457 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:39:47.160119    4457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:39:47.167833    4457 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0728 18:39:47.167854    4457 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0728 18:39:47.167859    4457 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0728 18:39:47.167913    4457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:39:47.175408    4457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:39:47.183077    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0728 18:39:47.183091    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0728 18:39:47.183097    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0728 18:39:47.183106    4457 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:39:47.183125    4457 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:39:47.183131    4457 kubeadm.go:157] found existing configuration files:
	
	I0728 18:39:47.183169    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 18:39:47.190436    4457 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:39:47.190454    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:39:47.190491    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:39:47.198053    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 18:39:47.205431    4457 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:39:47.205448    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:39:47.205483    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:39:47.213148    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 18:39:47.220352    4457 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:39:47.220374    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:39:47.220409    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:39:47.227976    4457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 18:39:47.235196    4457 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:39:47.235212    4457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:39:47.235245    4457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 18:39:47.242661    4457 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0728 18:39:47.301790    4457 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0728 18:39:47.301802    4457 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0728 18:39:47.301844    4457 kubeadm.go:310] [preflight] Running pre-flight checks
	I0728 18:39:47.301852    4457 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:39:47.387758    4457 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:39:47.387768    4457 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:39:47.387870    4457 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:39:47.387880    4457 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:39:47.387956    4457 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:39:47.387956    4457 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:39:47.560153    4457 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:39:47.560166    4457 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:39:47.583494    4457 out.go:204]   - Generating certificates and keys ...
	I0728 18:39:47.583553    4457 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0728 18:39:47.583560    4457 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0728 18:39:47.583614    4457 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0728 18:39:47.583620    4457 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0728 18:39:47.767383    4457 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0728 18:39:47.767390    4457 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0728 18:39:47.902927    4457 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0728 18:39:47.902943    4457 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0728 18:39:48.029398    4457 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0728 18:39:48.029416    4457 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0728 18:39:48.230360    4457 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0728 18:39:48.230376    4457 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0728 18:39:48.466250    4457 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0728 18:39:48.466267    4457 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0728 18:39:48.466383    4457 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.466393    4457 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.653665    4457 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0728 18:39:48.653680    4457 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0728 18:39:48.653781    4457 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.653793    4457 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-362000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0728 18:39:48.906060    4457 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0728 18:39:48.906072    4457 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0728 18:39:49.017102    4457 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0728 18:39:49.017115    4457 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0728 18:39:49.099226    4457 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0728 18:39:49.099241    4457 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0728 18:39:49.099370    4457 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:39:49.099380    4457 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:39:49.290179    4457 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:39:49.290193    4457 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:39:49.662361    4457 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0728 18:39:49.662379    4457 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0728 18:39:49.814296    4457 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:39:49.814311    4457 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:39:49.936514    4457 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:39:49.936530    4457 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:39:50.223908    4457 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:39:50.223913    4457 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:39:50.224262    4457 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:39:50.224272    4457 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:39:50.225979    4457 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:39:50.225995    4457 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:39:50.247458    4457 out.go:204]   - Booting up control plane ...
	I0728 18:39:50.247537    4457 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:39:50.247541    4457 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:39:50.247615    4457 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:39:50.247622    4457 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:39:50.247680    4457 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:39:50.247687    4457 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:39:50.248416    4457 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:39:50.248423    4457 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:39:50.248656    4457 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:39:50.248662    4457 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:39:50.248709    4457 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0728 18:39:50.248719    4457 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:39:50.354370    4457 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0728 18:39:50.354373    4457 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0728 18:39:50.354452    4457 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:39:50.354459    4457 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:39:50.862036    4457 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.998553ms
	I0728 18:39:50.862052    4457 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 507.998553ms
	I0728 18:39:50.862118    4457 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0728 18:39:50.862132    4457 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0728 18:39:55.360922    4457 kubeadm.go:310] [api-check] The API server is healthy after 4.5017507s
	I0728 18:39:55.360932    4457 command_runner.go:130] > [api-check] The API server is healthy after 4.5017507s
	I0728 18:39:55.372416    4457 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:39:55.372424    4457 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:39:55.379262    4457 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:39:55.379271    4457 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:39:55.393857    4457 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:39:55.393872    4457 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:39:55.394021    4457 kubeadm.go:310] [mark-control-plane] Marking the node multinode-362000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:39:55.394030    4457 command_runner.go:130] > [mark-control-plane] Marking the node multinode-362000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:39:55.402932    4457 kubeadm.go:310] [bootstrap-token] Using token: 53nsa7.gvs19q17kvpjmfej
	I0728 18:39:55.402953    4457 command_runner.go:130] > [bootstrap-token] Using token: 53nsa7.gvs19q17kvpjmfej
	I0728 18:39:55.430849    4457 out.go:204]   - Configuring RBAC rules ...
	I0728 18:39:55.431017    4457 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:39:55.431022    4457 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:39:55.473819    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:39:55.473825    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:39:55.478550    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:39:55.478567    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:39:55.480819    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:39:55.480828    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:39:55.482589    4457 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:39:55.482602    4457 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:39:55.484467    4457 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:39:55.484467    4457 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:39:55.769440    4457 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:39:55.769445    4457 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:39:56.177834    4457 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0728 18:39:56.177851    4457 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0728 18:39:56.764455    4457 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0728 18:39:56.764469    4457 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0728 18:39:56.765295    4457 kubeadm.go:310] 
	I0728 18:39:56.765354    4457 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0728 18:39:56.765371    4457 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0728 18:39:56.765383    4457 kubeadm.go:310] 
	I0728 18:39:56.765450    4457 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0728 18:39:56.765459    4457 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0728 18:39:56.765463    4457 kubeadm.go:310] 
	I0728 18:39:56.765508    4457 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0728 18:39:56.765516    4457 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0728 18:39:56.765571    4457 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:39:56.765579    4457 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:39:56.765626    4457 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:39:56.765634    4457 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:39:56.765642    4457 kubeadm.go:310] 
	I0728 18:39:56.765680    4457 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0728 18:39:56.765685    4457 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0728 18:39:56.765697    4457 kubeadm.go:310] 
	I0728 18:39:56.765732    4457 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:39:56.765737    4457 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:39:56.765740    4457 kubeadm.go:310] 
	I0728 18:39:56.765774    4457 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0728 18:39:56.765778    4457 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0728 18:39:56.765828    4457 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:39:56.765832    4457 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:39:56.765877    4457 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:39:56.765881    4457 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:39:56.765884    4457 kubeadm.go:310] 
	I0728 18:39:56.765956    4457 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:39:56.765967    4457 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:39:56.766038    4457 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0728 18:39:56.766039    4457 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0728 18:39:56.766048    4457 kubeadm.go:310] 
	I0728 18:39:56.766112    4457 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766118    4457 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766206    4457 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 \
	I0728 18:39:56.766213    4457 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 \
	I0728 18:39:56.766235    4457 command_runner.go:130] > 	--control-plane 
	I0728 18:39:56.766241    4457 kubeadm.go:310] 	--control-plane 
	I0728 18:39:56.766249    4457 kubeadm.go:310] 
	I0728 18:39:56.766316    4457 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:39:56.766320    4457 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:39:56.766327    4457 kubeadm.go:310] 
	I0728 18:39:56.766390    4457 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766397    4457 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 53nsa7.gvs19q17kvpjmfej \
	I0728 18:39:56.766481    4457 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:39:56.766486    4457 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:39:56.767454    4457 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:39:56.767464    4457 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:39:56.767521    4457 cni.go:84] Creating CNI manager for ""
	I0728 18:39:56.767527    4457 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0728 18:39:56.794259    4457 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 18:39:56.852216    4457 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 18:39:56.857699    4457 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0728 18:39:56.857712    4457 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0728 18:39:56.857719    4457 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0728 18:39:56.857724    4457 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 18:39:56.857728    4457 command_runner.go:130] > Access: 2024-07-29 01:39:33.652253582 +0000
	I0728 18:39:56.857739    4457 command_runner.go:130] > Modify: 2024-07-23 05:15:32.000000000 +0000
	I0728 18:39:56.857744    4457 command_runner.go:130] > Change: 2024-07-29 01:39:32.205688945 +0000
	I0728 18:39:56.857755    4457 command_runner.go:130] >  Birth: -
	I0728 18:39:56.857898    4457 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0728 18:39:56.857905    4457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0728 18:39:56.872532    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 18:39:57.067489    4457 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0728 18:39:57.071888    4457 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0728 18:39:57.076162    4457 command_runner.go:130] > serviceaccount/kindnet created
	I0728 18:39:57.081626    4457 command_runner.go:130] > daemonset.apps/kindnet created
	I0728 18:39:57.082864    4457 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:39:57.082924    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:57.082939    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-362000 minikube.k8s.io/updated_at=2024_07_28T18_39_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=multinode-362000 minikube.k8s.io/primary=true
	I0728 18:39:57.141282    4457 command_runner.go:130] > -16
	I0728 18:39:57.141501    4457 ops.go:34] apiserver oom_adj: -16
	I0728 18:39:57.223732    4457 command_runner.go:130] > node/multinode-362000 labeled
	I0728 18:39:57.224678    4457 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0728 18:39:57.224780    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:57.286683    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:57.727000    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:57.789564    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:58.224922    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:58.288791    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:58.726483    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:58.786616    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:59.224879    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:59.283451    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:39:59.725546    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:39:59.783054    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:00.226641    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:00.289133    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:00.724775    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:00.788264    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:01.224763    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:01.289858    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:01.726327    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:01.784894    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:02.224927    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:02.285820    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:02.725994    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:02.785885    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:03.225965    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:03.284414    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:03.724778    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:03.785882    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:04.225389    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:04.285366    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:04.725557    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:04.786398    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:05.226293    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:05.293946    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:05.725634    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:05.785973    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:06.226151    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:06.293162    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:06.725726    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:06.794820    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:07.225166    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:07.286052    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:07.726737    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:07.784693    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:08.224870    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:08.285867    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:08.726546    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:08.785482    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:09.225131    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:09.285788    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:09.725623    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:09.788869    4457 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0728 18:40:10.225058    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:40:10.283013    4457 command_runner.go:130] > NAME      SECRETS   AGE
	I0728 18:40:10.283026    4457 command_runner.go:130] > default   0         0s
	I0728 18:40:10.284131    4457 kubeadm.go:1113] duration metric: took 13.201513533s to wait for elevateKubeSystemPrivileges
	I0728 18:40:10.284150    4457 kubeadm.go:394] duration metric: took 23.136542682s to StartCluster
	I0728 18:40:10.284174    4457 settings.go:142] acquiring lock: {Name:mk9218fe520c81adf28e6207ae402102e10a5d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:40:10.284272    4457 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.284780    4457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/kubeconfig: {Name:mk76ac5b4283108fca1a66cc5cd0791fbea0691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:40:10.285026    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 18:40:10.285038    4457 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:40:10.285076    4457 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:40:10.285121    4457 addons.go:69] Setting storage-provisioner=true in profile "multinode-362000"
	I0728 18:40:10.285133    4457 addons.go:69] Setting default-storageclass=true in profile "multinode-362000"
	I0728 18:40:10.285153    4457 addons.go:234] Setting addon storage-provisioner=true in "multinode-362000"
	I0728 18:40:10.285172    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:10.309576    4457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-362000"
	I0728 18:40:10.309786    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:10.309950    4457 out.go:177] * Verifying Kubernetes components...
	I0728 18:40:10.310638    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.310672    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.310955    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.310988    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.320224    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52539
	I0728 18:40:10.320256    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52538
	I0728 18:40:10.320612    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.320645    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.320974    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.320984    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.320996    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.321036    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.321199    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.321249    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.321379    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:10.321476    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:10.321562    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:10.321610    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.321636    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.323925    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.324234    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
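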
	I0728 18:40:10.324779    4457 cert_rotation.go:137] Starting client certificate rotation controller
	I0728 18:40:10.324964    4457 addons.go:234] Setting addon default-storageclass=true in "multinode-362000"
	I0728 18:40:10.324992    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:10.325245    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.325277    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.330641    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52542
	I0728 18:40:10.331008    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.331360    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.331375    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.331589    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.331768    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:10.332140    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:10.332178    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:10.333069    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:10.334227    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52544
	I0728 18:40:10.352904    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:10.353246    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.353698    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.353710    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.353959    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.354421    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:10.354445    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:10.363396    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52546
	I0728 18:40:10.363753    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:10.364112    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:10.364129    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:10.364390    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:10.364525    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:10.364628    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:10.364708    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:10.374788    4457 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:40:10.375201    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:10.375451    4457 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 18:40:10.375461    4457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 18:40:10.375477    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:40:10.375580    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:40:10.375677    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:40:10.375768    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:40:10.375851    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:40:10.395728    4457 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:40:10.395747    4457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 18:40:10.395792    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:40:10.395957    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:40:10.396043    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:40:10.396125    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:40:10.396226    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:40:10.401627    4457 command_runner.go:130] > apiVersion: v1
	I0728 18:40:10.401639    4457 command_runner.go:130] > data:
	I0728 18:40:10.401643    4457 command_runner.go:130] >   Corefile: |
	I0728 18:40:10.401645    4457 command_runner.go:130] >     .:53 {
	I0728 18:40:10.401649    4457 command_runner.go:130] >         errors
	I0728 18:40:10.401652    4457 command_runner.go:130] >         health {
	I0728 18:40:10.401658    4457 command_runner.go:130] >            lameduck 5s
	I0728 18:40:10.401662    4457 command_runner.go:130] >         }
	I0728 18:40:10.401666    4457 command_runner.go:130] >         ready
	I0728 18:40:10.401673    4457 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0728 18:40:10.401677    4457 command_runner.go:130] >            pods insecure
	I0728 18:40:10.401681    4457 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0728 18:40:10.401693    4457 command_runner.go:130] >            ttl 30
	I0728 18:40:10.401697    4457 command_runner.go:130] >         }
	I0728 18:40:10.401700    4457 command_runner.go:130] >         prometheus :9153
	I0728 18:40:10.401705    4457 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0728 18:40:10.401709    4457 command_runner.go:130] >            max_concurrent 1000
	I0728 18:40:10.401713    4457 command_runner.go:130] >         }
	I0728 18:40:10.401716    4457 command_runner.go:130] >         cache 30
	I0728 18:40:10.401720    4457 command_runner.go:130] >         loop
	I0728 18:40:10.401723    4457 command_runner.go:130] >         reload
	I0728 18:40:10.401727    4457 command_runner.go:130] >         loadbalance
	I0728 18:40:10.401730    4457 command_runner.go:130] >     }
	I0728 18:40:10.401733    4457 command_runner.go:130] > kind: ConfigMap
	I0728 18:40:10.401737    4457 command_runner.go:130] > metadata:
	I0728 18:40:10.401742    4457 command_runner.go:130] >   creationTimestamp: "2024-07-29T01:39:56Z"
	I0728 18:40:10.401746    4457 command_runner.go:130] >   name: coredns
	I0728 18:40:10.401750    4457 command_runner.go:130] >   namespace: kube-system
	I0728 18:40:10.401753    4457 command_runner.go:130] >   resourceVersion: "229"
	I0728 18:40:10.401757    4457 command_runner.go:130] >   uid: 090d6d0b-6aa9-498f-b5ff-18ee2e948131
	I0728 18:40:10.401847    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 18:40:10.517689    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:40:10.608660    4457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:40:10.613299    4457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 18:40:10.755970    4457 command_runner.go:130] > configmap/coredns replaced
	I0728 18:40:10.758280    4457 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0728 18:40:10.758560    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.758560    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:10.758749    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:10.758752    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:10.758941    4457 node_ready.go:35] waiting up to 6m0s for node "multinode-362000" to be "Ready" ...
	I0728 18:40:10.759003    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:10.759004    4457 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 18:40:10.759008    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:10.759011    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:10.759017    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:10.759017    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:10.759021    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:10.759023    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:10.766260    4457 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0728 18:40:10.766273    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:10.766278    4457 round_trippers.go:580]     Audit-Id: 18631da8-eb42-4cf5-8868-257785e0a022
	I0728 18:40:10.766282    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:10.766285    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:10.766288    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:10.766291    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:10.766294    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:10 GMT
	I0728 18:40:10.766588    4457 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0728 18:40:10.766597    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:10.766603    4457 round_trippers.go:580]     Audit-Id: 700a3297-1555-4166-b81d-840902aaebd8
	I0728 18:40:10.766609    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:10.766614    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:10.766617    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:10.766621    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:10.766624    4457 round_trippers.go:580]     Content-Length: 291
	I0728 18:40:10.766628    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:10 GMT
	I0728 18:40:10.766675    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:10.766696    4457 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"355","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:10.767124    4457 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"355","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:10.767162    4457 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 18:40:10.767169    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:10.767176    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:10.767180    4457 round_trippers.go:473]     Content-Type: application/json
	I0728 18:40:10.767185    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:10.772809    4457 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0728 18:40:10.772827    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:10.772832    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:10 GMT
	I0728 18:40:10.772835    4457 round_trippers.go:580]     Audit-Id: 8878de70-45f8-4839-b86f-f063423caff9
	I0728 18:40:10.772838    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:10.772841    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:10.772844    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:10.772848    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:10.772850    4457 round_trippers.go:580]     Content-Length: 291
	I0728 18:40:10.772862    4457 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"358","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:11.062303    4457 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0728 18:40:11.062320    4457 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0728 18:40:11.062326    4457 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0728 18:40:11.062331    4457 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0728 18:40:11.062335    4457 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0728 18:40:11.062339    4457 command_runner.go:130] > pod/storage-provisioner created
	I0728 18:40:11.062368    4457 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0728 18:40:11.062376    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062385    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062402    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062409    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062555    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062570    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062576    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062583    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062586    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062590    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062595    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.062599    4457 main.go:141] libmachine: (multinode-362000) DBG | Closing plugin on server side
	I0728 18:40:11.062601    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.062784    4457 main.go:141] libmachine: (multinode-362000) DBG | Closing plugin on server side
	I0728 18:40:11.062785    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062797    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062797    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.062809    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.062886    4457 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0728 18:40:11.062894    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.062903    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.062915    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.067002    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:40:11.067014    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.067020    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.067023    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.067026    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.067030    4457 round_trippers.go:580]     Content-Length: 1273
	I0728 18:40:11.067033    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.067037    4457 round_trippers.go:580]     Audit-Id: 8a900ef1-fe6e-4dbd-a7ec-344182a6729a
	I0728 18:40:11.067040    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.067428    4457 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"379"},"items":[{"metadata":{"name":"standard","uid":"b2f47efd-8c58-4f8d-ad0f-27dfc164889d","resourceVersion":"369","creationTimestamp":"2024-07-29T01:40:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-29T01:40:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0728 18:40:11.067670    4457 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b2f47efd-8c58-4f8d-ad0f-27dfc164889d","resourceVersion":"369","creationTimestamp":"2024-07-29T01:40:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-29T01:40:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 18:40:11.067704    4457 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0728 18:40:11.067710    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.067716    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.067721    4457 round_trippers.go:473]     Content-Type: application/json
	I0728 18:40:11.067723    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.070206    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.070215    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.070220    4457 round_trippers.go:580]     Content-Length: 1220
	I0728 18:40:11.070223    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.070229    4457 round_trippers.go:580]     Audit-Id: 0e3733b8-b6c8-4111-84ee-078978e48daa
	I0728 18:40:11.070231    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.070235    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.070237    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.070239    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.070265    4457 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b2f47efd-8c58-4f8d-ad0f-27dfc164889d","resourceVersion":"369","creationTimestamp":"2024-07-29T01:40:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-29T01:40:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 18:40:11.070362    4457 main.go:141] libmachine: Making call to close driver server
	I0728 18:40:11.070370    4457 main.go:141] libmachine: (multinode-362000) Calling .Close
	I0728 18:40:11.070525    4457 main.go:141] libmachine: Successfully made call to close driver server
	I0728 18:40:11.070536    4457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0728 18:40:11.094564    4457 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 18:40:11.135503    4457 addons.go:510] duration metric: took 850.452019ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0728 18:40:11.260114    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:11.260130    4457 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 18:40:11.260136    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.260144    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.260153    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.260192    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.260157    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.260230    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.262846    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.262856    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.262861    4457 round_trippers.go:580]     Audit-Id: 67b8a37c-428f-4a9a-960c-05602441f47b
	I0728 18:40:11.262864    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.262874    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.262880    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.262883    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.262886    4457 round_trippers.go:580]     Content-Length: 291
	I0728 18:40:11.262889    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.262902    4457 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cdd02524-af69-44e9-9e2c-bfbb6e7d13b2","resourceVersion":"368","creationTimestamp":"2024-07-29T01:39:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0728 18:40:11.262948    4457 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-362000" context rescaled to 1 replicas
	I0728 18:40:11.263058    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.263070    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.263076    4457 round_trippers.go:580]     Audit-Id: ec8bd958-b06b-4f2f-979b-59534f7f9af2
	I0728 18:40:11.263080    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.263083    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.263086    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.263090    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.263093    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.263231    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:11.759227    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:11.759246    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:11.759255    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:11.759259    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:11.761392    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:11.761402    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:11.761407    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:11.761424    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:11.761436    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:11 GMT
	I0728 18:40:11.761451    4457 round_trippers.go:580]     Audit-Id: a4047d81-9009-45a6-8539-024944a72d9e
	I0728 18:40:11.761462    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:11.761467    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:11.761666    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:12.260536    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:12.260552    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:12.260564    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:12.260570    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:12.262176    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:12.262211    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:12.262222    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:12.262247    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:12.262254    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:12.262257    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:12.262261    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:12 GMT
	I0728 18:40:12.262263    4457 round_trippers.go:580]     Audit-Id: f204e21c-9b48-455d-8622-6066ec208239
	I0728 18:40:12.262371    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:12.759261    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:12.759274    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:12.759280    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:12.759283    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:12.760765    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:12.760774    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:12.760781    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:12.760786    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:12 GMT
	I0728 18:40:12.760790    4457 round_trippers.go:580]     Audit-Id: 992ec9f1-85c8-43b7-a888-469a1f8515b4
	I0728 18:40:12.760794    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:12.760797    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:12.760800    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:12.761071    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:12.761255    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:13.261024    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:13.261039    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:13.261046    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:13.261049    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:13.262638    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:13.262651    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:13.262659    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:13.262668    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:13.262672    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:13.262676    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:13.262679    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:13 GMT
	I0728 18:40:13.262682    4457 round_trippers.go:580]     Audit-Id: 0fffd104-8bd3-4fe0-9b9a-1cf59e471957
	I0728 18:40:13.262886    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:13.759154    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:13.759167    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:13.759174    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:13.759177    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:13.760713    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:13.760722    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:13.760726    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:13 GMT
	I0728 18:40:13.760730    4457 round_trippers.go:580]     Audit-Id: a45b0e81-84fa-4753-9739-8d28ab5d0a14
	I0728 18:40:13.760735    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:13.760738    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:13.760741    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:13.760744    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:13.760819    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:14.260145    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:14.260163    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:14.260172    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:14.260177    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:14.263013    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:14.263024    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:14.263030    4457 round_trippers.go:580]     Audit-Id: fdaf6131-25d1-47c2-acd1-446b115ffb8e
	I0728 18:40:14.263035    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:14.263038    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:14.263041    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:14.263044    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:14.263052    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:14 GMT
	I0728 18:40:14.263231    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:14.760316    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:14.760340    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:14.760351    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:14.760357    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:14.762851    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:14.762866    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:14.762873    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:14.762879    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:14.762888    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:14.762894    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:14 GMT
	I0728 18:40:14.762900    4457 round_trippers.go:580]     Audit-Id: af5c0ab8-4e6d-4849-bceb-87f698162cc2
	I0728 18:40:14.762906    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:14.762991    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:14.763230    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:15.260614    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:15.260643    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:15.260728    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:15.260736    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:15.263024    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:15.263038    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:15.263052    4457 round_trippers.go:580]     Audit-Id: 2b9d5d3c-3d13-49b3-a08e-4f2d9d3a85ce
	I0728 18:40:15.263057    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:15.263062    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:15.263065    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:15.263069    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:15.263072    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:15 GMT
	I0728 18:40:15.263180    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:15.760417    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:15.760434    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:15.760443    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:15.760448    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:15.762872    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:15.762881    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:15.762886    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:15.762888    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:15.762891    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:15.762893    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:15 GMT
	I0728 18:40:15.762896    4457 round_trippers.go:580]     Audit-Id: 42794fb7-77b1-4a0e-b567-be7a6bfd31d6
	I0728 18:40:15.762899    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:15.763152    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:16.259813    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:16.259848    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:16.259927    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:16.259936    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:16.262408    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:16.262422    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:16.262430    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:16 GMT
	I0728 18:40:16.262434    4457 round_trippers.go:580]     Audit-Id: 14cecab9-6461-4105-9148-d34c0b44a270
	I0728 18:40:16.262439    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:16.262447    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:16.262451    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:16.262455    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:16.262556    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:16.761072    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:16.761173    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:16.761187    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:16.761195    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:16.763703    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:16.763731    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:16.763743    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:16.763750    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:16.763754    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:16.763757    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:16.763762    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:16 GMT
	I0728 18:40:16.763782    4457 round_trippers.go:580]     Audit-Id: 0d47ea48-d37e-4fce-a81d-002d90f6a056
	I0728 18:40:16.763938    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:16.764194    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:17.259207    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:17.259228    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:17.259239    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:17.259249    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:17.262189    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:17.262202    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:17.262209    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:17.262214    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:17.262217    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:17 GMT
	I0728 18:40:17.262222    4457 round_trippers.go:580]     Audit-Id: 9812172f-e586-4c55-b4ee-2bea8ae5de51
	I0728 18:40:17.262226    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:17.262230    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:17.262686    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:17.759473    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:17.759496    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:17.759508    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:17.759513    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:17.762434    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:17.762476    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:17.762491    4457 round_trippers.go:580]     Audit-Id: 43b3d3be-06fc-46db-8a5d-e3d4d5eb47e6
	I0728 18:40:17.762499    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:17.762507    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:17.762532    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:17.762542    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:17.762549    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:17 GMT
	I0728 18:40:17.762674    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:18.260311    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:18.260340    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:18.260351    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:18.260358    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:18.262900    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:18.262915    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:18.262922    4457 round_trippers.go:580]     Audit-Id: 9953baba-073c-48aa-a6e9-278f0713ae83
	I0728 18:40:18.262927    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:18.262930    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:18.262933    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:18.262936    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:18.262941    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:18 GMT
	I0728 18:40:18.263036    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:18.759061    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:18.759085    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:18.759096    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:18.759112    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:18.761682    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:18.761694    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:18.761700    4457 round_trippers.go:580]     Audit-Id: 15b329d9-3ab4-44cd-9049-aa7b2decadcc
	I0728 18:40:18.761705    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:18.761709    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:18.761712    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:18.761717    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:18.761723    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:18 GMT
	I0728 18:40:18.762177    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:19.259000    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:19.259018    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:19.259024    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:19.259030    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:19.261212    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:19.261224    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:19.261230    4457 round_trippers.go:580]     Audit-Id: 415b93b9-1aa3-44bd-a820-1208b592c064
	I0728 18:40:19.261233    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:19.261236    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:19.261239    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:19.261241    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:19.261247    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:19 GMT
	I0728 18:40:19.261358    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:19.261584    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:19.759307    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:19.759329    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:19.759341    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:19.759350    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:19.761775    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:19.761788    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:19.761795    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:19 GMT
	I0728 18:40:19.761800    4457 round_trippers.go:580]     Audit-Id: 502bf989-c4d0-445b-b01d-1f1d535adf96
	I0728 18:40:19.761804    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:19.761807    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:19.761811    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:19.761814    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:19.762181    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:20.260567    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:20.260597    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:20.260609    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:20.260617    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:20.264657    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:40:20.264674    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:20.264682    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:20.264685    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:20 GMT
	I0728 18:40:20.264690    4457 round_trippers.go:580]     Audit-Id: 717f088c-b9a2-4bec-a160-3a9f814ada36
	I0728 18:40:20.264694    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:20.264721    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:20.264725    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:20.264782    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:20.759050    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:20.759074    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:20.759087    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:20.759094    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:20.761793    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:20.761807    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:20.761815    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:20.761819    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:20.761823    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:20.761827    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:20 GMT
	I0728 18:40:20.761831    4457 round_trippers.go:580]     Audit-Id: c4d77fc1-3159-4dbe-8f91-6d29cc9ceefa
	I0728 18:40:20.761835    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:20.762217    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:21.259770    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:21.259798    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:21.259811    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:21.259819    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:21.262872    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:40:21.262887    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:21.262897    4457 round_trippers.go:580]     Audit-Id: e08cd4a9-4dc5-4bfd-9602-71f114db843d
	I0728 18:40:21.262905    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:21.262921    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:21.262930    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:21.262935    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:21.262941    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:21 GMT
	I0728 18:40:21.263367    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:21.263621    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:21.758899    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:21.758911    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:21.758917    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:21.758920    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:21.760397    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:21.760407    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:21.760412    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:21.760416    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:21 GMT
	I0728 18:40:21.760420    4457 round_trippers.go:580]     Audit-Id: 1845b54f-7048-420b-a9cc-345ce940e0f8
	I0728 18:40:21.760423    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:21.760427    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:21.760432    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:21.760518    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:22.260348    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:22.260380    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:22.260392    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:22.260474    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:22.264848    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:40:22.264870    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:22.264881    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:22.264888    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:22.264894    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:22.264901    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:22 GMT
	I0728 18:40:22.264906    4457 round_trippers.go:580]     Audit-Id: a34d53ed-e866-4fba-b73c-e127bd7eee1c
	I0728 18:40:22.264912    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:22.265137    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:22.759929    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:22.759985    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:22.759997    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:22.760004    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:22.762275    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:22.762297    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:22.762304    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:22 GMT
	I0728 18:40:22.762309    4457 round_trippers.go:580]     Audit-Id: 4601913b-d210-4463-8d21-9d1c3a5d5d24
	I0728 18:40:22.762320    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:22.762325    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:22.762329    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:22.762332    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:22.762513    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:23.258971    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:23.258995    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:23.259006    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:23.259013    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:23.261485    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:23.261504    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:23.261512    4457 round_trippers.go:580]     Audit-Id: 09c1c715-a373-48d3-867c-e9bc394b5dae
	I0728 18:40:23.261516    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:23.261519    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:23.261529    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:23.261533    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:23.261537    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:23 GMT
	I0728 18:40:23.261656    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:23.759001    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:23.759024    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:23.759036    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:23.759043    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:23.761072    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:23.761090    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:23.761102    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:23.761110    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:23 GMT
	I0728 18:40:23.761116    4457 round_trippers.go:580]     Audit-Id: 8589fcb8-03dd-4451-ad7d-6a234a984cf4
	I0728 18:40:23.761121    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:23.761124    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:23.761127    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:23.761321    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:23.761508    4457 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:40:24.259348    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:24.259370    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:24.259381    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:24.259386    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:24.261950    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:24.261964    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:24.261971    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:24 GMT
	I0728 18:40:24.261976    4457 round_trippers.go:580]     Audit-Id: a347f63d-0547-4dc0-9887-ecb808f30b7b
	I0728 18:40:24.261979    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:24.261982    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:24.261985    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:24.261989    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:24.262374    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:24.759752    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:24.759776    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:24.759789    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:24.759798    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:24.762449    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:24.762465    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:24.762473    4457 round_trippers.go:580]     Audit-Id: b4270990-e2d6-4d91-9a90-e915982c91b7
	I0728 18:40:24.762477    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:24.762481    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:24.762484    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:24.762487    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:24.762491    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:24 GMT
	I0728 18:40:24.762565    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"323","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0728 18:40:25.258884    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:25.258912    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.258933    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.258940    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.261294    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.261304    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.261309    4457 round_trippers.go:580]     Audit-Id: 34079ddc-0650-4e56-8564-dc888c5c3890
	I0728 18:40:25.261313    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.261315    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.261318    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.261321    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.261323    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.261501    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:25.261699    4457 node_ready.go:49] node "multinode-362000" has status "Ready":"True"
	I0728 18:40:25.261711    4457 node_ready.go:38] duration metric: took 14.503038736s for node "multinode-362000" to be "Ready" ...
	I0728 18:40:25.261719    4457 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:40:25.261757    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:25.261762    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.261768    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.261772    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.264121    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.264129    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.264134    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.264137    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.264141    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.264143    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.264146    4457 round_trippers.go:580]     Audit-Id: 48379578-ba8f-4422-8fad-492237524d4c
	I0728 18:40:25.264149    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.265103    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"404"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0728 18:40:25.267406    4457 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:25.267459    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:25.267464    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.267470    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.267474    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.269607    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.269614    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.269618    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.269621    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.269624    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.269626    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.269629    4457 round_trippers.go:580]     Audit-Id: 1f7e9486-994e-4bba-9cc5-e40bb9316b31
	I0728 18:40:25.269631    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.269920    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0728 18:40:25.270183    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:25.270199    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.270206    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.270210    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.271439    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:25.271446    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.271451    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:25 GMT
	I0728 18:40:25.271454    4457 round_trippers.go:580]     Audit-Id: 18a2f255-e2cf-44ac-8a5f-27ec41ee30e6
	I0728 18:40:25.271466    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.271469    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.271471    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.271474    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.271730    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:25.769046    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:25.769075    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.769088    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.769095    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.771961    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:25.771999    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.772032    4457 round_trippers.go:580]     Audit-Id: 6d71ec48-2f1b-44c7-8856-88eb807ea518
	I0728 18:40:25.772047    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.772057    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.772062    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.772066    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.772069    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:25.772230    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0728 18:40:25.772599    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:25.772608    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:25.772616    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:25.772625    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:25.773942    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:25.773951    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:25.773956    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:25.773963    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:25.773967    4457 round_trippers.go:580]     Audit-Id: 879544d9-d129-4f7f-b284-bedb5f99a847
	I0728 18:40:25.773970    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:25.773973    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:25.773976    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:25.774043    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.268338    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:26.268362    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.268374    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.268380    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.270981    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:26.270995    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.271002    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:26.271006    4457 round_trippers.go:580]     Audit-Id: 164daf9b-5def-4288-b3e8-3352b2b53d96
	I0728 18:40:26.271010    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.271013    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.271017    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.271021    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.271250    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"402","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0728 18:40:26.271627    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.271637    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.271647    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.271652    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.273317    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.273327    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.273333    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.273337    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:26 GMT
	I0728 18:40:26.273339    4457 round_trippers.go:580]     Audit-Id: 20ddf9f1-b697-42ab-95d3-a57551a5f208
	I0728 18:40:26.273342    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.273344    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.273347    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.273588    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.768053    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:40:26.768081    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.768178    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.768194    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.770949    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:26.770969    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.770981    4457 round_trippers.go:580]     Audit-Id: da51901c-9414-4d31-8654-2201f2fd0fb0
	I0728 18:40:26.770989    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.770995    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.771000    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.771005    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.771011    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.771233    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0728 18:40:26.771608    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.771619    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.771627    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.771632    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.773197    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.773207    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.773215    4457 round_trippers.go:580]     Audit-Id: f0b8501e-4f89-477d-9cef-d2242ada3831
	I0728 18:40:26.773219    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.773223    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.773227    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.773230    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.773234    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.773342    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.773509    4457 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.773518    4457 pod_ready.go:81] duration metric: took 1.506130085s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.773524    4457 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.773559    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:40:26.773563    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.773569    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.773573    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.774648    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.774677    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.774682    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.774686    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.774688    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.774710    4457 round_trippers.go:580]     Audit-Id: a93d1b3b-c0dc-49e5-9e03-05b639755669
	I0728 18:40:26.774718    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.774721    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.774932    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"337","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0728 18:40:26.775142    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.775149    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.775155    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.775159    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.776234    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.776240    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.776245    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.776248    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.776251    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.776254    4457 round_trippers.go:580]     Audit-Id: f465b160-4449-43a1-839f-6ac58d16f9f2
	I0728 18:40:26.776256    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.776259    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.776353    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.776506    4457 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.776513    4457 pod_ready.go:81] duration metric: took 2.983958ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.776522    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.776552    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:40:26.776556    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.776562    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.776564    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.777481    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.777489    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.777494    4457 round_trippers.go:580]     Audit-Id: 9e66dcbf-7231-4e5a-beff-b5411a839d42
	I0728 18:40:26.777501    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.777504    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.777507    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.777511    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.777521    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.777674    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"392","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0728 18:40:26.777905    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.777911    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.777917    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.777921    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.778916    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.778925    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.778933    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.778938    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.778944    4457 round_trippers.go:580]     Audit-Id: a29b0d70-1574-4be0-90fe-5dfab0fef028
	I0728 18:40:26.778948    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.778952    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.778955    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.779141    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.779305    4457 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.779313    4457 pod_ready.go:81] duration metric: took 2.783586ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.779319    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.779353    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:40:26.779358    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.779364    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.779368    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.780324    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.780331    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.780335    4457 round_trippers.go:580]     Audit-Id: 602ba5d4-072e-4f9e-8a98-2bfc5daa09d3
	I0728 18:40:26.780339    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.780343    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.780346    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.780349    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.780352    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.780544    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"391","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0728 18:40:26.780770    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.780778    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.780783    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.780787    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.781618    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:26.781626    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.781632    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.781637    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.781647    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.781651    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.781654    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.781657    4457 round_trippers.go:580]     Audit-Id: c29fa219-546a-4898-9545-0969dd593e05
	I0728 18:40:26.781801    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"397","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0728 18:40:26.781958    4457 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.781965    4457 pod_ready.go:81] duration metric: took 2.640467ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.781970    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.782003    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:40:26.782008    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.782014    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.782017    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.783057    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:26.783066    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.783072    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.783077    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.783080    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.783083    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.783086    4457 round_trippers.go:580]     Audit-Id: 3354da0f-5ec4-49af-a334-d357f510f9be
	I0728 18:40:26.783090    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.783266    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"381","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0728 18:40:26.861008    4457 request.go:629] Waited for 77.48576ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.861115    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:26.861124    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:26.861135    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:26.861154    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:26.863738    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:26.863749    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:26.863756    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:26.863760    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:26.863764    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:26.863768    4457 round_trippers.go:580]     Audit-Id: 1fde7d92-a96a-4b09-8228-6f9f1d406488
	I0728 18:40:26.863773    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:26.863776    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:26.863951    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:40:26.864199    4457 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:26.864210    4457 pod_ready.go:81] duration metric: took 82.236902ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:26.864219    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:27.059279    4457 request.go:629] Waited for 195.00583ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:40:27.059435    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:40:27.059447    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.059458    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.059476    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.062273    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:27.062288    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.062301    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.062305    4457 round_trippers.go:580]     Audit-Id: 5fc54c2f-5bde-4311-8ecb-a36886b3ae53
	I0728 18:40:27.062309    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.062312    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.062316    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.062329    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.062438    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"344","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0728 18:40:27.259622    4457 request.go:629] Waited for 196.881917ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:27.259819    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:40:27.259830    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.259840    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.259846    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.262566    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:27.262582    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.262589    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.262595    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.262599    4457 round_trippers.go:580]     Audit-Id: 5da50520-f933-4df1-8168-3b169328594d
	I0728 18:40:27.262603    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.262607    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.262611    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.262724    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:40:27.262971    4457 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:40:27.262982    4457 pod_ready.go:81] duration metric: took 398.764855ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:40:27.262995    4457 pod_ready.go:38] duration metric: took 2.001302979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:40:27.263017    4457 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:40:27.263087    4457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:40:27.276508    4457 command_runner.go:130] > 2038
	I0728 18:40:27.276759    4457 api_server.go:72] duration metric: took 16.992042297s to wait for apiserver process to appear ...
	I0728 18:40:27.276767    4457 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:40:27.276782    4457 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:40:27.280519    4457 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:40:27.280557    4457 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0728 18:40:27.280562    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.280568    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.280572    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.281045    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:40:27.281053    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.281058    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.281061    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.281063    4457 round_trippers.go:580]     Content-Length: 263
	I0728 18:40:27.281067    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.281070    4457 round_trippers.go:580]     Audit-Id: f10d8667-9a4e-495c-9813-468f64fff001
	I0728 18:40:27.281073    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.281075    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.281124    4457 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 18:40:27.281174    4457 api_server.go:141] control plane version: v1.30.3
	I0728 18:40:27.281185    4457 api_server.go:131] duration metric: took 4.413875ms to wait for apiserver health ...
	I0728 18:40:27.281192    4457 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 18:40:27.459120    4457 request.go:629] Waited for 177.816231ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.459188    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.459196    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.459207    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.459213    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.462655    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:40:27.462673    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.462680    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.462696    4457 round_trippers.go:580]     Audit-Id: 65569108-b6b0-48e7-a3b6-eaec6a9c3e0d
	I0728 18:40:27.462701    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.462705    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.462711    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.462713    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.463508    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0728 18:40:27.464758    4457 system_pods.go:59] 8 kube-system pods found
	I0728 18:40:27.464773    4457 system_pods.go:61] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:40:27.464777    4457 system_pods.go:61] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:40:27.464780    4457 system_pods.go:61] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:40:27.464785    4457 system_pods.go:61] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:40:27.464790    4457 system_pods.go:61] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:40:27.464793    4457 system_pods.go:61] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:40:27.464796    4457 system_pods.go:61] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:40:27.464799    4457 system_pods.go:61] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running
	I0728 18:40:27.464803    4457 system_pods.go:74] duration metric: took 183.610259ms to wait for pod list to return data ...
	I0728 18:40:27.464807    4457 default_sa.go:34] waiting for default service account to be created ...
	I0728 18:40:27.659087    4457 request.go:629] Waited for 194.191537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:40:27.659137    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:40:27.659145    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.659154    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.659161    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.661928    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:27.661943    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.661950    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.661954    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.661958    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.661971    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.661976    4457 round_trippers.go:580]     Content-Length: 261
	I0728 18:40:27.661979    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:27 GMT
	I0728 18:40:27.661983    4457 round_trippers.go:580]     Audit-Id: 4aadf24d-c89f-41c9-8a53-a3e69516a618
	I0728 18:40:27.662004    4457 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"379c0dca-8465-4249-afbe-a226c72714a6","resourceVersion":"334","creationTimestamp":"2024-07-29T01:40:10Z"}}]}
	I0728 18:40:27.662149    4457 default_sa.go:45] found service account: "default"
	I0728 18:40:27.662162    4457 default_sa.go:55] duration metric: took 197.353594ms for default service account to be created ...
	I0728 18:40:27.662170    4457 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 18:40:27.859543    4457 request.go:629] Waited for 197.334207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.859667    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:40:27.859679    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:27.859690    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:27.859705    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:27.863099    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:40:27.863114    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:27.863124    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:27.863132    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:27.863140    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:27.863145    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:27.863151    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:28 GMT
	I0728 18:40:27.863156    4457 round_trippers.go:580]     Audit-Id: 89372aba-9228-4ce7-8c3e-9ba696ef14dc
	I0728 18:40:27.863738    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0728 18:40:27.864990    4457 system_pods.go:86] 8 kube-system pods found
	I0728 18:40:27.865001    4457 system_pods.go:89] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:40:27.865004    4457 system_pods.go:89] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:40:27.865008    4457 system_pods.go:89] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:40:27.865011    4457 system_pods.go:89] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:40:27.865014    4457 system_pods.go:89] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:40:27.865017    4457 system_pods.go:89] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:40:27.865020    4457 system_pods.go:89] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:40:27.865026    4457 system_pods.go:89] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running
	I0728 18:40:27.865031    4457 system_pods.go:126] duration metric: took 202.861517ms to wait for k8s-apps to be running ...
	I0728 18:40:27.865036    4457 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:40:27.865087    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:40:27.877189    4457 system_svc.go:56] duration metric: took 12.148245ms WaitForService to wait for kubelet
	I0728 18:40:27.877209    4457 kubeadm.go:582] duration metric: took 17.592503941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:40:27.877222    4457 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:40:28.060545    4457 request.go:629] Waited for 183.186568ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:40:28.060617    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:40:28.060627    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:28.060638    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:28.060649    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:28.063062    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:28.063077    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:28.063084    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:28.063088    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:28 GMT
	I0728 18:40:28.063092    4457 round_trippers.go:580]     Audit-Id: 23ca1e57-1ef9-463d-9917-6293510499e5
	I0728 18:40:28.063095    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:28.063099    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:28.063103    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:28.063178    4457 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0728 18:40:28.063479    4457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:40:28.063507    4457 node_conditions.go:123] node cpu capacity is 2
	I0728 18:40:28.063533    4457 node_conditions.go:105] duration metric: took 186.30824ms to run NodePressure ...
	I0728 18:40:28.063551    4457 start.go:241] waiting for startup goroutines ...
	I0728 18:40:28.063559    4457 start.go:246] waiting for cluster config update ...
	I0728 18:40:28.063575    4457 start.go:255] writing updated cluster config ...
	I0728 18:40:28.085419    4457 out.go:177] 
	I0728 18:40:28.108631    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:28.108721    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:40:28.131178    4457 out.go:177] * Starting "multinode-362000-m02" worker node in "multinode-362000" cluster
	I0728 18:40:28.173202    4457 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:40:28.173236    4457 cache.go:56] Caching tarball of preloaded images
	I0728 18:40:28.173439    4457 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:40:28.173458    4457 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:40:28.173553    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:40:28.174529    4457 start.go:360] acquireMachinesLock for multinode-362000-m02: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:40:28.174649    4457 start.go:364] duration metric: took 96.396µs to acquireMachinesLock for "multinode-362000-m02"
	I0728 18:40:28.174677    4457 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:40:28.174767    4457 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0728 18:40:28.196279    4457 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:40:28.196422    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:28.196454    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:28.206069    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52553
	I0728 18:40:28.206413    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:28.206778    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:28.206797    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:28.207019    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:28.207174    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:28.207296    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:28.207409    4457 start.go:159] libmachine.API.Create for "multinode-362000" (driver="hyperkit")
	I0728 18:40:28.207424    4457 client.go:168] LocalClient.Create starting
	I0728 18:40:28.207452    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem
	I0728 18:40:28.207509    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:40:28.207521    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:40:28.207570    4457 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem
	I0728 18:40:28.207609    4457 main.go:141] libmachine: Decoding PEM data...
	I0728 18:40:28.207621    4457 main.go:141] libmachine: Parsing certificate...
	I0728 18:40:28.207639    4457 main.go:141] libmachine: Running pre-create checks...
	I0728 18:40:28.207644    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .PreCreateCheck
	I0728 18:40:28.207727    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.207759    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:40:28.217427    4457 main.go:141] libmachine: Creating machine...
	I0728 18:40:28.217457    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .Create
	I0728 18:40:28.217681    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.217968    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.217664    4485 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:40:28.218070    4457 main.go:141] libmachine: (multinode-362000-m02) Downloading /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0728 18:40:28.417113    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.417024    4485 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa...
	I0728 18:40:28.458969    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.458896    4485 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk...
	I0728 18:40:28.458979    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Writing magic tar header
	I0728 18:40:28.458991    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Writing SSH key tar header
	I0728 18:40:28.459389    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | I0728 18:40:28.459351    4485 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02 ...
	I0728 18:40:28.887087    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.887108    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid
	I0728 18:40:28.887119    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Using UUID 803737f6-60f1-4d1a-bdda-22c83e05ebd1
	I0728 18:40:28.912735    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Generated MAC 6:55:c7:17:95:12
	I0728 18:40:28.912762    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:40:28.912830    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:40:28.912879    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:40:28.912926    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "803737f6-60f1-4d1a-bdda-22c83e05ebd1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:40:28.912966    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 803737f6-60f1-4d1a-bdda-22c83e05ebd1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:40:28.912996    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:40:28.915928    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 DEBUG: hyperkit: Pid is 4486
	I0728 18:40:28.916380    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 0
	I0728 18:40:28.916404    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:28.916470    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:28.917361    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:28.917452    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:28.917486    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:28.917522    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:28.917550    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:28.917570    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:28.917584    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:28.917592    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:28.917600    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:28.917608    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:28.917626    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:28.917639    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:28.917668    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:28.917685    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:28.923387    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:40:28.931573    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:40:28.932540    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:40:28.932561    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:40:28.932582    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:40:28.932598    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:40:29.320577    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:40:29.320592    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:40:29.435884    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:40:29.435905    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:40:29.435916    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:40:29.435943    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:40:29.436773    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:40:29.436784    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:40:30.918501    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 1
	I0728 18:40:30.918517    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:30.918587    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:30.919342    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:30.919399    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:30.919420    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:30.919429    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:30.919438    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:30.919449    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:30.919490    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:30.919500    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:30.919512    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:30.919520    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:30.919527    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:30.919540    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:30.919547    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:30.919555    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:32.919568    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 2
	I0728 18:40:32.919590    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:32.919676    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:32.920412    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:32.920465    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:32.920480    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:32.920489    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:32.920497    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:32.920503    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:32.920509    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:32.920517    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:32.920525    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:32.920530    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:32.920537    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:32.920542    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:32.920555    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:32.920567    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:34.921463    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 3
	I0728 18:40:34.921477    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:34.921597    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:34.922396    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:34.922462    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:34.922476    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:34.922489    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:34.922497    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:34.922527    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:34.922538    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:34.922546    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:34.922554    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:34.922562    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:34.922571    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:34.922577    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:34.922590    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:34.922602    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:35.064243    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:40:35.064420    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:40:35.064430    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:40:35.086991    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:40:35 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:40:36.923506    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 4
	I0728 18:40:36.923520    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:36.923644    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:36.924397    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:36.924465    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0728 18:40:36.924489    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:40:36.924514    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:5a:b:23:f6:8a:62 ID:1,5a:b:23:f6:8a:62 Lease:0x66a6f235}
	I0728 18:40:36.924526    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:36:b9:93:b4:48 ID:1,86:36:b9:93:b4:48 Lease:0x66a8436a}
	I0728 18:40:36.924535    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:ce:aa:48:68:3a ID:1,b2:ce:aa:48:68:3a Lease:0x66a8432b}
	I0728 18:40:36.924544    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:fa:f2:17:be:3c:e7 ID:1,fa:f2:17:be:3c:e7 Lease:0x66a6f1a0}
	I0728 18:40:36.924551    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:32:4f:62:fc:87:4f ID:1,32:4f:62:fc:87:4f Lease:0x66a84257}
	I0728 18:40:36.924560    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:92:4:49:a3:bc:cb ID:1,92:4:49:a3:bc:cb Lease:0x66a6f105}
	I0728 18:40:36.924567    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:da:96:e7:56:73:66 ID:1,da:96:e7:56:73:66 Lease:0x66a841e8}
	I0728 18:40:36.924574    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:f7:34:b6:18:f ID:1,9a:f7:34:b6:18:f Lease:0x66a842aa}
	I0728 18:40:36.924587    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:52:f5:df:74:5 ID:1,22:52:f5:df:74:5 Lease:0x66a83a67}
	I0728 18:40:36.924595    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:e2:fb:8b:30:31 ID:1,42:e2:fb:8b:30:31 Lease:0x66a839a2}
	I0728 18:40:36.924604    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:32:60:55:a4:6b ID:1,f6:32:60:55:a4:6b Lease:0x66a8380d}
	I0728 18:40:38.926293    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 5
	I0728 18:40:38.926311    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:38.926416    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:38.927187    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:40:38.927233    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0728 18:40:38.927266    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:40:38.927293    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | Found match: 6:55:c7:17:95:12
	I0728 18:40:38.927328    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | IP: 192.169.0.14
	I0728 18:40:38.927369    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:40:38.927999    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:38.928131    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:38.928238    4457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0728 18:40:38.928247    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:40:38.928325    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:38.928394    4457 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:40:38.929157    4457 main.go:141] libmachine: Detecting operating system of created instance...
	I0728 18:40:38.929165    4457 main.go:141] libmachine: Waiting for SSH to be available...
	I0728 18:40:38.929169    4457 main.go:141] libmachine: Getting to WaitForSSH function...
	I0728 18:40:38.929174    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:38.929261    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:38.929352    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:38.929452    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:38.929561    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:38.929698    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:38.929908    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:38.929916    4457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0728 18:40:39.947133    4457 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0728 18:40:42.997563    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:40:42.997576    4457 main.go:141] libmachine: Detecting the provisioner...
	I0728 18:40:42.997582    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:42.997714    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:42.997827    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:42.997912    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:42.997996    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:42.998124    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:42.998272    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:42.998280    4457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0728 18:40:43.045978    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0728 18:40:43.046023    4457 main.go:141] libmachine: found compatible host: buildroot
	I0728 18:40:43.046030    4457 main.go:141] libmachine: Provisioning with buildroot...
	I0728 18:40:43.046037    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:43.046170    4457 buildroot.go:166] provisioning hostname "multinode-362000-m02"
	I0728 18:40:43.046181    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:43.046287    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.046370    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.046448    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.046527    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.046623    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.046740    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.046874    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.046882    4457 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m02 && echo "multinode-362000-m02" | sudo tee /etc/hostname
	I0728 18:40:43.105059    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m02
	
	I0728 18:40:43.105080    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.105211    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.105320    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.105409    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.105504    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.105643    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.105802    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.105819    4457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:40:43.169701    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:40:43.169727    4457 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:40:43.169738    4457 buildroot.go:174] setting up certificates
	I0728 18:40:43.169744    4457 provision.go:84] configureAuth start
	I0728 18:40:43.169752    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:40:43.169898    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:43.170014    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.170106    4457 provision.go:143] copyHostCerts
	I0728 18:40:43.170135    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:40:43.170197    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:40:43.170203    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:40:43.170514    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:40:43.170722    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:40:43.170768    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:40:43.170773    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:40:43.170856    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:40:43.171009    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:40:43.171051    4457 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:40:43.171056    4457 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:40:43.171141    4457 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:40:43.171299    4457 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-362000-m02]
	I0728 18:40:43.298073    4457 provision.go:177] copyRemoteCerts
	I0728 18:40:43.298125    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:40:43.298138    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.298279    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.298379    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.298491    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.298573    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:43.329778    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:40:43.329849    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:40:43.349799    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:40:43.349871    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:40:43.369649    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:40:43.369722    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:40:43.389798    4457 provision.go:87] duration metric: took 220.050649ms to configureAuth
	I0728 18:40:43.389813    4457 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:40:43.389957    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:43.389970    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:43.390115    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.390206    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.390303    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.390377    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.390451    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.390588    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.390713    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.390721    4457 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:40:43.439593    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:40:43.439605    4457 buildroot.go:70] root file system type: tmpfs
	I0728 18:40:43.439687    4457 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:40:43.439700    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.439834    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.439933    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.440017    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.440100    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.440224    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.440371    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.440415    4457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:40:43.501624    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:40:43.501641    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:43.501774    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:43.501873    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.501958    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:43.502046    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:43.502176    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:43.502316    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:43.502328    4457 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:40:45.035137    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:40:45.035154    4457 main.go:141] libmachine: Checking connection to Docker...
	I0728 18:40:45.035161    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetURL
	I0728 18:40:45.035314    4457 main.go:141] libmachine: Docker is up and running!
	I0728 18:40:45.035322    4457 main.go:141] libmachine: Reticulating splines...
	I0728 18:40:45.035327    4457 client.go:171] duration metric: took 16.828230217s to LocalClient.Create
	I0728 18:40:45.035339    4457 start.go:167] duration metric: took 16.828263949s to libmachine.API.Create "multinode-362000"
	I0728 18:40:45.035344    4457 start.go:293] postStartSetup for "multinode-362000-m02" (driver="hyperkit")
	I0728 18:40:45.035351    4457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:40:45.035361    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.035510    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:40:45.035522    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:45.035604    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.035702    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.035791    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.035884    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:45.066439    4457 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:40:45.069494    4457 command_runner.go:130] > NAME=Buildroot
	I0728 18:40:45.069503    4457 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:40:45.069509    4457 command_runner.go:130] > ID=buildroot
	I0728 18:40:45.069515    4457 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:40:45.069519    4457 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:40:45.069605    4457 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:40:45.069615    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:40:45.069711    4457 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:40:45.069900    4457 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:40:45.069906    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:40:45.070111    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:40:45.077222    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:40:45.097606    4457 start.go:296] duration metric: took 62.254158ms for postStartSetup
	I0728 18:40:45.097632    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:40:45.098242    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:45.098370    4457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:40:45.098726    4457 start.go:128] duration metric: took 16.924283943s to createHost
	I0728 18:40:45.098741    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:45.098832    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.098919    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.099003    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.099077    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.099185    4457 main.go:141] libmachine: Using SSH client type: native
	I0728 18:40:45.099306    4457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x59500c0] 0x5952e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:40:45.099313    4457 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:40:45.147578    4457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217244.801768538
	
	I0728 18:40:45.147591    4457 fix.go:216] guest clock: 1722217244.801768538
	I0728 18:40:45.147596    4457 fix.go:229] Guest: 2024-07-28 18:40:44.801768538 -0700 PDT Remote: 2024-07-28 18:40:45.098735 -0700 PDT m=+82.457808845 (delta=-296.966462ms)
	I0728 18:40:45.147607    4457 fix.go:200] guest clock delta is within tolerance: -296.966462ms
	I0728 18:40:45.147611    4457 start.go:83] releasing machines lock for "multinode-362000-m02", held for 16.973286936s
	I0728 18:40:45.147628    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.147756    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:45.173962    4457 out.go:177] * Found network options:
	I0728 18:40:45.204336    4457 out.go:177]   - NO_PROXY=192.169.0.13
	W0728 18:40:45.229219    4457 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:40:45.229269    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.230244    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.230522    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:40:45.230627    4457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:40:45.230663    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	W0728 18:40:45.230752    4457 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:40:45.230852    4457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:40:45.230873    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:40:45.230901    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.231129    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:40:45.231167    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.231351    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:40:45.231375    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.231540    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:45.231579    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:40:45.231713    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:40:45.266285    4457 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:40:45.266505    4457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:40:45.266561    4457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:40:45.314306    4457 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:40:45.314769    4457 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:40:45.314791    4457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:40:45.314800    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:40:45.314867    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:40:45.330493    4457 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:40:45.330785    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:40:45.338853    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:40:45.347025    4457 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:40:45.347070    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:40:45.355439    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:40:45.363602    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:40:45.371577    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:40:45.380880    4457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:40:45.389435    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:40:45.397472    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:40:45.405641    4457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:40:45.413729    4457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:40:45.421006    4457 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:40:45.421160    4457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:40:45.429796    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:45.518123    4457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:40:45.537530    4457 start.go:495] detecting cgroup driver to use...
	I0728 18:40:45.537594    4457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:40:45.549102    4457 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:40:45.549409    4457 command_runner.go:130] > [Unit]
	I0728 18:40:45.549419    4457 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:40:45.549424    4457 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:40:45.549429    4457 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:40:45.549434    4457 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:40:45.549438    4457 command_runner.go:130] > StartLimitBurst=3
	I0728 18:40:45.549442    4457 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:40:45.549445    4457 command_runner.go:130] > [Service]
	I0728 18:40:45.549449    4457 command_runner.go:130] > Type=notify
	I0728 18:40:45.549452    4457 command_runner.go:130] > Restart=on-failure
	I0728 18:40:45.549457    4457 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0728 18:40:45.549462    4457 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:40:45.549472    4457 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:40:45.549479    4457 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:40:45.549487    4457 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:40:45.549493    4457 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:40:45.549499    4457 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:40:45.549506    4457 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:40:45.549516    4457 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:40:45.549522    4457 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:40:45.549526    4457 command_runner.go:130] > ExecStart=
	I0728 18:40:45.549540    4457 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:40:45.549545    4457 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:40:45.549551    4457 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:40:45.549557    4457 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:40:45.549560    4457 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:40:45.549564    4457 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:40:45.549567    4457 command_runner.go:130] > LimitCORE=infinity
	I0728 18:40:45.549572    4457 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:40:45.549576    4457 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:40:45.549580    4457 command_runner.go:130] > TasksMax=infinity
	I0728 18:40:45.549585    4457 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:40:45.549590    4457 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:40:45.549594    4457 command_runner.go:130] > Delegate=yes
	I0728 18:40:45.549598    4457 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:40:45.549606    4457 command_runner.go:130] > KillMode=process
	I0728 18:40:45.549610    4457 command_runner.go:130] > [Install]
	I0728 18:40:45.549614    4457 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:40:45.549801    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:40:45.565724    4457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:40:45.583324    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:40:45.593605    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:40:45.603827    4457 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:40:45.641120    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:40:45.651501    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:40:45.666232    4457 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:40:45.666521    4457 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:40:45.669466    4457 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:40:45.669624    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:40:45.676791    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:40:45.691034    4457 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:40:45.784895    4457 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:40:45.882151    4457 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:40:45.882175    4457 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:40:45.896100    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:45.990118    4457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:40:48.297597    4457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.30750466s)
	I0728 18:40:48.297663    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:40:48.308016    4457 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:40:48.321063    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:40:48.331739    4457 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:40:48.422195    4457 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:40:48.531384    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:48.639310    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:40:48.653793    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:40:48.664199    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:48.761525    4457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:40:48.826095    4457 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:40:48.826173    4457 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:40:48.830343    4457 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:40:48.830369    4457 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:40:48.830376    4457 command_runner.go:130] > Device: 0,22	Inode: 818         Links: 1
	I0728 18:40:48.830384    4457 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:40:48.830389    4457 command_runner.go:130] > Access: 2024-07-29 01:40:48.429154603 +0000
	I0728 18:40:48.830394    4457 command_runner.go:130] > Modify: 2024-07-29 01:40:48.429154603 +0000
	I0728 18:40:48.830399    4457 command_runner.go:130] > Change: 2024-07-29 01:40:48.432154602 +0000
	I0728 18:40:48.830405    4457 command_runner.go:130] >  Birth: -
	I0728 18:40:48.830443    4457 start.go:563] Will wait 60s for crictl version
	I0728 18:40:48.830507    4457 ssh_runner.go:195] Run: which crictl
	I0728 18:40:48.833509    4457 command_runner.go:130] > /usr/bin/crictl
	I0728 18:40:48.833587    4457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:40:48.859242    4457 command_runner.go:130] > Version:  0.1.0
	I0728 18:40:48.859256    4457 command_runner.go:130] > RuntimeName:  docker
	I0728 18:40:48.859292    4457 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:40:48.859335    4457 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:40:48.860541    4457 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:40:48.860603    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:40:48.877040    4457 command_runner.go:130] > 27.1.0
	I0728 18:40:48.877909    4457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:40:48.899005    4457 command_runner.go:130] > 27.1.0
	I0728 18:40:48.923026    4457 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:40:48.944909    4457 out.go:177]   - env NO_PROXY=192.169.0.13
	I0728 18:40:48.970973    4457 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:40:48.971189    4457 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:40:48.974398    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:40:48.983839    4457 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:40:48.983985    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:48.984226    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:48.984242    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:48.993127    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52577
	I0728 18:40:48.993494    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:48.993872    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:48.993890    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:48.994125    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:48.994249    4457 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:40:48.994332    4457 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:40:48.994420    4457 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:40:48.995350    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:48.995612    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:48.995629    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:49.004619    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52579
	I0728 18:40:49.004963    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:49.005291    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:49.005303    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:49.005517    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:49.005628    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:49.005731    4457 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.14
	I0728 18:40:49.005737    4457 certs.go:194] generating shared ca certs ...
	I0728 18:40:49.005755    4457 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:40:49.005928    4457 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:40:49.006014    4457 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:40:49.006024    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:40:49.006050    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:40:49.006068    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:40:49.006086    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:40:49.006170    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:40:49.006221    4457 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:40:49.006231    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:40:49.006266    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:40:49.006297    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:40:49.006332    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:40:49.006404    4457 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:40:49.006442    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.006467    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.006485    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.006509    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:40:49.026572    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:40:49.046453    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:40:49.065085    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:40:49.084898    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:40:49.105463    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:40:49.125140    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:40:49.145922    4457 ssh_runner.go:195] Run: openssl version
	I0728 18:40:49.150071    4457 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:40:49.150248    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:40:49.158617    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.161912    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.162020    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.162068    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:40:49.166203    4457 command_runner.go:130] > 3ec20f2e
	I0728 18:40:49.166303    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:40:49.174625    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:40:49.182878    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.186204    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.186308    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.186343    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:40:49.190766    4457 command_runner.go:130] > b5213941
	I0728 18:40:49.191001    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:40:49.200173    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:40:49.208601    4457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.211883    4457 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.211977    4457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.212026    4457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:40:49.216284    4457 command_runner.go:130] > 51391683
	I0728 18:40:49.216335    4457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:40:49.224683    4457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:40:49.227840    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:40:49.227865    4457 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:40:49.227898    4457 kubeadm.go:934] updating node {m02 192.169.0.14 8443 v1.30.3 docker false true} ...
	I0728 18:40:49.227961    4457 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:40:49.228003    4457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:40:49.235367    4457 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0728 18:40:49.235443    4457 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0728 18:40:49.235482    4457 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0728 18:40:49.243649    4457 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0728 18:40:49.243649    4457 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0728 18:40:49.243652    4457 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0728 18:40:49.243669    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0728 18:40:49.243672    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0728 18:40:49.243709    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:40:49.243759    4457 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0728 18:40:49.243759    4457 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0728 18:40:49.247026    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0728 18:40:49.247047    4457 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0728 18:40:49.247063    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0728 18:40:49.257978    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0728 18:40:49.276725    4457 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0728 18:40:49.276766    4457 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0728 18:40:49.276772    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0728 18:40:49.276916    4457 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0728 18:40:49.298878    4457 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0728 18:40:49.298902    4457 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0728 18:40:49.298938    4457 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0728 18:40:49.898411    4457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0728 18:40:49.906559    4457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0728 18:40:49.921120    4457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:40:49.934769    4457 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:40:49.937710    4457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:40:49.947952    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:50.047907    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:40:50.064915    4457 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:40:50.065204    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:40:50.065229    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:40:50.074077    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52581
	I0728 18:40:50.074432    4457 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:40:50.074769    4457 main.go:141] libmachine: Using API Version  1
	I0728 18:40:50.074781    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:40:50.074969    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:40:50.075085    4457 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:40:50.075169    4457 start.go:317] joinCluster: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:40:50.075246    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0728 18:40:50.075261    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:40:50.075347    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:40:50.075454    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:40:50.075546    4457 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:40:50.075649    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:40:50.156498    4457 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:40:50.159134    4457 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:40:50.159180    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02"
	I0728 18:40:50.190727    4457 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:40:50.278320    4457 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 18:40:50.278342    4457 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 18:40:50.308652    4457 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:40:50.308668    4457 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:40:50.308672    4457 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:40:50.412127    4457 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:40:50.913045    4457 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.388166ms
	I0728 18:40:50.913059    4457 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0728 18:40:51.428559    4457 command_runner.go:130] > This node has joined the cluster:
	I0728 18:40:51.428576    4457 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0728 18:40:51.428582    4457 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0728 18:40:51.428588    4457 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0728 18:40:51.429534    4457 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:40:51.429564    4457 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1hcgh4.wmxieotdzhetb15n --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02": (1.270389395s)
	I0728 18:40:51.429591    4457 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0728 18:40:51.536688    4457 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0728 18:40:51.642016    4457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-362000-m02 minikube.k8s.io/updated_at=2024_07_28T18_40_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=multinode-362000 minikube.k8s.io/primary=false
	I0728 18:40:51.708434    4457 command_runner.go:130] > node/multinode-362000-m02 labeled
	I0728 18:40:51.708583    4457 start.go:319] duration metric: took 1.63344562s to joinCluster
	I0728 18:40:51.708631    4457 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:40:51.708804    4457 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:40:51.763047    4457 out.go:177] * Verifying Kubernetes components...
	I0728 18:40:51.784394    4457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:40:51.902394    4457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:40:51.914926    4457 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:40:51.915161    4457 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6df5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:40:51.915420    4457 node_ready.go:35] waiting up to 6m0s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:40:51.915463    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:51.915468    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:51.915477    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:51.915481    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:51.917354    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:51.917365    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:51.917372    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:51.917377    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:51.917381    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:51.917384    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:51.917387    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:51.917397    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:52 GMT
	I0728 18:40:51.917400    4457 round_trippers.go:580]     Audit-Id: d12b139f-80ae-4718-a058-1ec650ed124a
	I0728 18:40:51.917457    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:52.417647    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:52.417675    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:52.417687    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:52.417693    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:52.420314    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:52.420330    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:52.420338    4457 round_trippers.go:580]     Audit-Id: b383f664-c954-44fd-872c-d26e05a063a4
	I0728 18:40:52.420341    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:52.420346    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:52.420349    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:52.420354    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:52.420357    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:52.420360    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:52 GMT
	I0728 18:40:52.420429    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:52.916776    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:52.916808    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:52.916822    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:52.916831    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:52.919448    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:52.919463    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:52.919471    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:52.919475    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:52.919480    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:52.919484    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:52.919493    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:53 GMT
	I0728 18:40:52.919499    4457 round_trippers.go:580]     Audit-Id: 308d3f88-d2ac-4b1d-ae60-de6330a739ae
	I0728 18:40:52.919505    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:52.919591    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:53.416460    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:53.416488    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:53.416495    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:53.416499    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:53.418148    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:53.418157    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:53.418162    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:53 GMT
	I0728 18:40:53.418165    4457 round_trippers.go:580]     Audit-Id: 49d007cb-b6be-4970-b4c1-0ea39d37b196
	I0728 18:40:53.418169    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:53.418171    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:53.418174    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:53.418177    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:53.418179    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:53.418243    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:53.916759    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:53.916789    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:53.916841    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:53.916851    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:53.919633    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:53.919651    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:53.919659    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:53.919668    4457 round_trippers.go:580]     Content-Length: 3978
	I0728 18:40:53.919673    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:54 GMT
	I0728 18:40:53.919679    4457 round_trippers.go:580]     Audit-Id: eecf0534-e934-4c67-8661-ae888824bd41
	I0728 18:40:53.919685    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:53.919699    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:53.919705    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:53.919777    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"470","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2954 chars]
	I0728 18:40:53.919962    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:40:54.415741    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:54.415767    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:54.415778    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:54.415783    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:54.418337    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:54.418353    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:54.418362    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:54 GMT
	I0728 18:40:54.418379    4457 round_trippers.go:580]     Audit-Id: eaa3ad15-28e0-4470-9088-01c7917f0353
	I0728 18:40:54.418388    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:54.418393    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:54.418402    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:54.418407    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:54.418411    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:54.418478    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:54.915568    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:54.915594    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:54.915606    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:54.915610    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:54.917942    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:54.917957    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:54.917964    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:54.917968    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:54.917972    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:55 GMT
	I0728 18:40:54.917975    4457 round_trippers.go:580]     Audit-Id: 6660786f-3d59-43ee-b51b-7f2426f7d62f
	I0728 18:40:54.917979    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:54.917983    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:54.917986    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:54.918139    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:55.416396    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:55.416418    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:55.416425    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:55.416431    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:55.418358    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:55.418371    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:55.418378    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:55.418383    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:55 GMT
	I0728 18:40:55.418387    4457 round_trippers.go:580]     Audit-Id: 5ae9f345-b9bd-453f-b77f-c2370d002e68
	I0728 18:40:55.418391    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:55.418406    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:55.418411    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:55.418414    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:55.418466    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:55.915660    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:55.915675    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:55.915723    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:55.915727    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:55.917527    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:55.917538    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:55.917544    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:55.917548    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:56 GMT
	I0728 18:40:55.917551    4457 round_trippers.go:580]     Audit-Id: 082582db-68cc-4bac-add8-46564d4ab3d3
	I0728 18:40:55.917553    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:55.917556    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:55.917558    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:55.917561    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:55.917619    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:56.415461    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:56.415478    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:56.415485    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:56.415489    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:56.417302    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:56.417312    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:56.417318    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:56.417342    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:56.417349    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:56.417352    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:56.417355    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:56 GMT
	I0728 18:40:56.417358    4457 round_trippers.go:580]     Audit-Id: 63d664b3-59f4-4874-82f2-c8b6d40c8bee
	I0728 18:40:56.417360    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:56.417418    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:56.417566    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:40:56.916438    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:56.916455    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:56.916501    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:56.916506    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:56.917979    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:56.917989    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:56.917995    4457 round_trippers.go:580]     Audit-Id: ef6c2ccf-0d71-43c6-845f-6d98899f4eb5
	I0728 18:40:56.917999    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:56.918003    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:56.918011    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:56.918015    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:56.918024    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:56.918027    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:57 GMT
	I0728 18:40:56.918079    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:57.415969    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:57.415987    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:57.415995    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:57.415999    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:57.417727    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:57.417738    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:57.417744    4457 round_trippers.go:580]     Audit-Id: bca505a7-a0b6-4bc3-9b71-1541345897ab
	I0728 18:40:57.417752    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:57.417756    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:57.417758    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:57.417761    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:57.417764    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:57.417767    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:57 GMT
	I0728 18:40:57.417816    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:57.916250    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:57.916274    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:57.916281    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:57.916285    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:57.918051    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:57.918069    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:57.918079    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:58 GMT
	I0728 18:40:57.918085    4457 round_trippers.go:580]     Audit-Id: 1a4f1da9-1dab-4ac4-baf1-c1234fc0ff36
	I0728 18:40:57.918091    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:57.918101    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:57.918108    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:57.918114    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:57.918117    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:57.918234    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:58.415857    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:58.415905    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:58.415915    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:58.415920    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:58.417570    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:58.417586    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:58.417598    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:58 GMT
	I0728 18:40:58.417609    4457 round_trippers.go:580]     Audit-Id: 6236d266-a312-4cfb-be28-cd41c4b6a7d0
	I0728 18:40:58.417620    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:58.417627    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:58.417631    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:58.417635    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:58.417638    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:58.417691    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:58.417845    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:40:58.916828    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:58.916843    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:58.916850    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:58.916854    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:58.918226    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:58.918237    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:58.918247    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:58.918251    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:58.918256    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:58.918258    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:59 GMT
	I0728 18:40:58.918261    4457 round_trippers.go:580]     Audit-Id: 53860841-ae52-4770-b024-2915b5fd1f6f
	I0728 18:40:58.918263    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:58.918266    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:58.918363    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:59.415563    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:59.415594    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:59.415629    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:59.415638    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:59.418139    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:40:59.418157    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:59.418165    4457 round_trippers.go:580]     Audit-Id: f6fd96da-6d0a-48a8-92fa-0cfce8390021
	I0728 18:40:59.418169    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:59.418175    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:59.418179    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:59.418182    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:59.418186    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:59.418199    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:40:59 GMT
	I0728 18:40:59.418260    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:40:59.916342    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:40:59.916358    4457 round_trippers.go:469] Request Headers:
	I0728 18:40:59.916365    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:40:59.916368    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:40:59.917948    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:40:59.917958    4457 round_trippers.go:577] Response Headers:
	I0728 18:40:59.917964    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:40:59.917968    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:40:59.917970    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:40:59.917972    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:00 GMT
	I0728 18:40:59.917975    4457 round_trippers.go:580]     Audit-Id: 8e49f51d-a92f-488b-a20f-2634a5a0cb1f
	I0728 18:40:59.917978    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:40:59.917990    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:40:59.918029    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:41:00.416214    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:00.416243    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:00.416253    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:00.416258    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:00.417937    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:00.417961    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:00.417968    4457 round_trippers.go:580]     Audit-Id: 523b0305-eeac-4ae7-81de-a80936ca2113
	I0728 18:41:00.417974    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:00.417980    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:00.417991    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:00.418012    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:00.418019    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:41:00.418022    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:00 GMT
	I0728 18:41:00.418079    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:41:00.418265    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:00.916225    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:00.916255    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:00.916263    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:00.916271    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:00.917771    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:00.917783    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:00.917788    4457 round_trippers.go:580]     Content-Length: 4087
	I0728 18:41:00.917792    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:01 GMT
	I0728 18:41:00.917795    4457 round_trippers.go:580]     Audit-Id: 09d72ceb-c858-49fe-9bb1-3f38f9aa7cf7
	I0728 18:41:00.917798    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:00.917801    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:00.917803    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:00.917805    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:00.917855    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"476","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0728 18:41:01.415618    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:01.415635    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:01.415686    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:01.415690    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:01.417196    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:01.417211    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:01.417217    4457 round_trippers.go:580]     Audit-Id: a9841fa3-7ff8-4139-82f8-69daa0bba949
	I0728 18:41:01.417221    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:01.417223    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:01.417226    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:01.417230    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:01.417232    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:01 GMT
	I0728 18:41:01.417300    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:01.916477    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:01.916513    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:01.916525    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:01.916533    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:01.919024    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:01.919047    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:01.919054    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:01.919058    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:01.919091    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:01.919098    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:02 GMT
	I0728 18:41:01.919102    4457 round_trippers.go:580]     Audit-Id: cd9eb789-5513-4485-a4a2-c02a70f6ff9b
	I0728 18:41:01.919107    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:01.919319    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:02.416280    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:02.416308    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:02.416318    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:02.416325    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:02.419423    4457 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:41:02.419439    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:02.419446    4457 round_trippers.go:580]     Audit-Id: 77d6588a-a0c9-49ad-9bea-f4995beaa0a4
	I0728 18:41:02.419452    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:02.419456    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:02.419459    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:02.419472    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:02.419483    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:02 GMT
	I0728 18:41:02.419572    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:02.419792    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:02.915682    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:02.915708    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:02.915720    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:02.915725    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:02.918412    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:02.918431    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:02.918438    4457 round_trippers.go:580]     Audit-Id: cae8137c-c9ed-4122-9d64-6b2d98249f2f
	I0728 18:41:02.918442    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:02.918446    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:02.918451    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:02.918454    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:02.918458    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:03 GMT
	I0728 18:41:02.918549    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:03.416917    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:03.416947    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:03.417001    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:03.417014    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:03.419558    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:03.419572    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:03.419579    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:03.419584    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:03 GMT
	I0728 18:41:03.419587    4457 round_trippers.go:580]     Audit-Id: fcee71fb-b532-4412-b39b-c31e1c6abbeb
	I0728 18:41:03.419590    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:03.419595    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:03.419600    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:03.419684    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:03.916767    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:03.916795    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:03.916806    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:03.916813    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:03.919492    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:03.919511    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:03.919519    4457 round_trippers.go:580]     Audit-Id: 6202dce9-fba1-4c3b-8a72-4afb52afb468
	I0728 18:41:03.919523    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:03.919554    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:03.919564    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:03.919571    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:03.919575    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:04 GMT
	I0728 18:41:03.919682    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:04.415878    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:04.415906    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:04.415917    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:04.415925    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:04.418626    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:04.418647    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:04.418655    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:04.418661    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:04.418665    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:04.418668    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:04.418674    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:04 GMT
	I0728 18:41:04.418677    4457 round_trippers.go:580]     Audit-Id: cd1cfba9-fce7-49de-9e8f-26cb52d34aa6
	I0728 18:41:04.418747    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:04.916227    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:04.916249    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:04.916257    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:04.916263    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:04.918262    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:04.918276    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:04.918284    4457 round_trippers.go:580]     Audit-Id: 651f1d5c-8b97-4bb6-8469-b801c0e4c7da
	I0728 18:41:04.918290    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:04.918294    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:04.918300    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:04.918303    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:04.918310    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:05 GMT
	I0728 18:41:04.918394    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:04.918613    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:05.415597    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:05.415623    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:05.415635    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:05.415642    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:05.418163    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:05.418178    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:05.418196    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:05.418201    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:05.418240    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:05 GMT
	I0728 18:41:05.418249    4457 round_trippers.go:580]     Audit-Id: 805a3ed5-4c00-42f1-bcdd-41bc48301f81
	I0728 18:41:05.418252    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:05.418256    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:05.418442    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:05.916769    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:05.916827    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:05.916846    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:05.916856    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:05.919019    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:05.919032    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:05.919039    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:05.919043    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:05.919048    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:06 GMT
	I0728 18:41:05.919052    4457 round_trippers.go:580]     Audit-Id: 52130e5d-ffde-45a4-a971-eab6d44a0ed6
	I0728 18:41:05.919056    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:05.919078    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:05.919315    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:06.416652    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:06.416678    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:06.416690    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:06.416698    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:06.419246    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:06.419261    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:06.419268    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:06 GMT
	I0728 18:41:06.419273    4457 round_trippers.go:580]     Audit-Id: 00266202-acdd-48c9-815e-6fb223f16957
	I0728 18:41:06.419276    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:06.419280    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:06.419312    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:06.419333    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:06.419551    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:06.916999    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:06.917028    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:06.917039    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:06.917047    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:06.919459    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:06.919474    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:06.919481    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:07 GMT
	I0728 18:41:06.919486    4457 round_trippers.go:580]     Audit-Id: 20a92b14-dd96-4db8-940d-85d9c0a2a810
	I0728 18:41:06.919491    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:06.919494    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:06.919498    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:06.919501    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:06.919713    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:06.919938    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:07.416877    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:07.416904    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:07.416915    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:07.416929    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:07.419685    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:07.419699    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:07.419706    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:07.419732    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:07.419740    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:07.419744    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:07 GMT
	I0728 18:41:07.419748    4457 round_trippers.go:580]     Audit-Id: 1966e958-252f-4ca1-874e-729a3892c519
	I0728 18:41:07.419751    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:07.419885    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:07.915407    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:07.915434    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:07.915445    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:07.915452    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:07.918193    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:07.918207    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:07.918214    4457 round_trippers.go:580]     Audit-Id: 7b6f4c86-e2f2-499e-a6e9-5005f194f3af
	I0728 18:41:07.918227    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:07.918233    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:07.918238    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:07.918246    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:07.918249    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:08 GMT
	I0728 18:41:07.918526    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:08.417366    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:08.417392    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:08.417403    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:08.417409    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:08.419926    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:08.419943    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:08.419951    4457 round_trippers.go:580]     Audit-Id: ee3ce7eb-aaaa-4ee2-bb27-a193c0eeeed4
	I0728 18:41:08.419955    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:08.419960    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:08.419965    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:08.419969    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:08.419973    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:08 GMT
	I0728 18:41:08.420197    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:08.916465    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:08.916488    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:08.916500    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:08.916506    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:08.918896    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:08.918913    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:08.918920    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:08.918925    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:08.918929    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:08.918932    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:08.918936    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:09 GMT
	I0728 18:41:08.918941    4457 round_trippers.go:580]     Audit-Id: 58f30bdd-4b54-4b5d-a953-7f188f39b30b
	I0728 18:41:08.919050    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:09.416532    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:09.416554    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:09.416566    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:09.416574    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:09.418853    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:09.418868    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:09.418875    4457 round_trippers.go:580]     Audit-Id: c3eac4a3-9b45-45db-a1fb-25fda462c318
	I0728 18:41:09.418880    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:09.418910    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:09.418915    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:09.418921    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:09.418925    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:09 GMT
	I0728 18:41:09.419024    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:09.419238    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:09.916622    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:09.916642    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:09.916712    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:09.916721    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:09.918446    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:09.918461    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:09.918468    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:09.918472    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:09.918494    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:10 GMT
	I0728 18:41:09.918500    4457 round_trippers.go:580]     Audit-Id: c7c8034a-1726-4389-9683-dda851a06a30
	I0728 18:41:09.918503    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:09.918508    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:09.918659    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:10.415621    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:10.415650    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:10.415661    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:10.415668    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:10.418352    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:10.418367    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:10.418374    4457 round_trippers.go:580]     Audit-Id: 8ba843fa-a401-43c0-b5bf-cfb675340351
	I0728 18:41:10.418378    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:10.418385    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:10.418389    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:10.418394    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:10.418397    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:10 GMT
	I0728 18:41:10.418497    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:10.916408    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:10.916433    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:10.916523    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:10.916533    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:10.918927    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:10.918939    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:10.918946    4457 round_trippers.go:580]     Audit-Id: 5758486e-fdad-4bd6-b190-2d6279a70cce
	I0728 18:41:10.918951    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:10.918955    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:10.918958    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:10.918972    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:10.918977    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:11 GMT
	I0728 18:41:10.919210    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:11.415374    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:11.415398    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:11.415410    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:11.415418    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:11.418115    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:11.418129    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:11.418136    4457 round_trippers.go:580]     Audit-Id: 4090ba38-eff8-421a-91e6-81cd59f054a1
	I0728 18:41:11.418141    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:11.418148    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:11.418153    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:11.418157    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:11.418161    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:11 GMT
	I0728 18:41:11.418419    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:11.915591    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:11.915615    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:11.915627    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:11.915633    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:11.918225    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:11.918246    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:11.918257    4457 round_trippers.go:580]     Audit-Id: 6ca26a0d-28ad-4b81-b9af-5f08435c100b
	I0728 18:41:11.918265    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:11.918273    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:11.918276    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:11.918281    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:11.918297    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:12 GMT
	I0728 18:41:11.918527    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:11.918742    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:12.415206    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:12.415218    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:12.415225    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:12.415229    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:12.416645    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:12.416654    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:12.416659    4457 round_trippers.go:580]     Audit-Id: c3eaaf8c-b3c5-46b6-a7da-cb7674fd9ce2
	I0728 18:41:12.416663    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:12.416666    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:12.416670    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:12.416673    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:12.416675    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:12 GMT
	I0728 18:41:12.416729    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:12.915496    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:12.915521    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:12.915533    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:12.915537    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:12.918435    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:12.918450    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:12.918457    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:12.918461    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:12.918466    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:12.918470    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:13 GMT
	I0728 18:41:12.918473    4457 round_trippers.go:580]     Audit-Id: 8dc254ff-923f-4776-a405-351862f2b98e
	I0728 18:41:12.918477    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:12.918591    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:13.415401    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:13.415440    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:13.415455    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:13.415535    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:13.418180    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:13.418194    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:13.418202    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:13.418206    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:13 GMT
	I0728 18:41:13.418209    4457 round_trippers.go:580]     Audit-Id: f3f9fc94-e5f5-4700-92c4-a321937068ce
	I0728 18:41:13.418213    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:13.418215    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:13.418219    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:13.418288    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:13.915929    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:13.915952    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:13.915965    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:13.915971    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:13.918663    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:13.918683    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:13.918724    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:13.918731    4457 round_trippers.go:580]     Audit-Id: 43340662-41c2-429d-8f3f-e2324c253607
	I0728 18:41:13.918734    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:13.918738    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:13.918742    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:13.918746    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:13.918826    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"491","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0728 18:41:13.919039    4457 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:41:14.415323    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:14.415347    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.415359    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.415365    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.418009    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:14.418026    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.418033    4457 round_trippers.go:580]     Audit-Id: 6d7c9f3f-d214-4077-bcc5-358f585101d4
	I0728 18:41:14.418040    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.418044    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.418049    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.418053    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.418057    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.418157    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"512","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3142 chars]
	I0728 18:41:14.418363    4457 node_ready.go:49] node "multinode-362000-m02" has status "Ready":"True"
	I0728 18:41:14.418380    4457 node_ready.go:38] duration metric: took 22.503390694s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:41:14.418388    4457 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:41:14.418432    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:41:14.418438    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.418445    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.418451    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.423078    4457 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:41:14.423088    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.423093    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.423095    4457 round_trippers.go:580]     Audit-Id: cf39922b-fddb-47d2-9d4f-545639471fc8
	I0728 18:41:14.423098    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.423100    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.423103    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.423105    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.423726    4457 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"512"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70370 chars]
	I0728 18:41:14.425296    4457 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.425340    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:41:14.425345    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.425351    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.425353    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.426799    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.426810    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.426817    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.426820    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.426823    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.426825    4457 round_trippers.go:580]     Audit-Id: c27b7ab2-38db-48f7-ba5b-f2602d93b372
	I0728 18:41:14.426828    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.426831    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.426939    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"416","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0728 18:41:14.427189    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.427195    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.427201    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.427204    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.428431    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.428439    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.428444    4457 round_trippers.go:580]     Audit-Id: c72c4ca1-1fb0-456a-9f22-745a31a724ba
	I0728 18:41:14.428446    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.428449    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.428452    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.428454    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.428458    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.428521    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.428683    4457 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.428692    4457 pod_ready.go:81] duration metric: took 3.385887ms for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.428698    4457 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.428731    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:41:14.428737    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.428742    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.428745    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.429730    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.429740    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.429745    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.429749    4457 round_trippers.go:580]     Audit-Id: 51bde7d2-1fa7-405e-8e23-295a5099bc1f
	I0728 18:41:14.429752    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.429755    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.429758    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.429761    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.429824    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"337","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0728 18:41:14.430059    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.430066    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.430071    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.430074    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.431039    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.431048    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.431056    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.431060    4457 round_trippers.go:580]     Audit-Id: 252cc3c9-6dd1-4613-b8ec-19f32b1bf0bb
	I0728 18:41:14.431080    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.431086    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.431089    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.431091    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.431297    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.431456    4457 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.431467    4457 pod_ready.go:81] duration metric: took 2.761563ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.431477    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.431509    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:41:14.431514    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.431520    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.431523    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.432486    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.432494    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.432500    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.432506    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.432512    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.432518    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.432524    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.432529    4457 round_trippers.go:580]     Audit-Id: d3984880-f623-4a7f-8c2d-3d8575b6c911
	I0728 18:41:14.432690    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"392","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0728 18:41:14.432918    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.432925    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.432931    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.432935    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.433769    4457 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:41:14.433776    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.433781    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.433785    4457 round_trippers.go:580]     Audit-Id: 0408274a-afe6-4996-803b-0511bea524d5
	I0728 18:41:14.433789    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.433794    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.433798    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.433802    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.433912    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.434071    4457 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.434078    4457 pod_ready.go:81] duration metric: took 2.597455ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.434084    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.434113    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:41:14.434118    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.434123    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.434127    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.435161    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.435169    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.435175    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.435180    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.435184    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.435188    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.435192    4457 round_trippers.go:580]     Audit-Id: a87fe60d-e026-4a21-a54d-c3d5e4ecb353
	I0728 18:41:14.435200    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.435402    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"391","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0728 18:41:14.435626    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:14.435633    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.435639    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.435643    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.436663    4457 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:41:14.436679    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.436685    4457 round_trippers.go:580]     Audit-Id: da88a2df-3fd0-46ac-9d49-67d9d6cb79de
	I0728 18:41:14.436689    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.436692    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.436695    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.436698    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.436700    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.436884    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:14.437038    4457 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.437048    4457 pod_ready.go:81] duration metric: took 2.956971ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.437055    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.615374    4457 request.go:629] Waited for 178.266998ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:41:14.615525    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:41:14.615539    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.615551    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.615558    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.617847    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:14.617864    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.617874    4457 round_trippers.go:580]     Audit-Id: d4c0f16b-8a44-402d-92da-e2de9a02f0e1
	I0728 18:41:14.617881    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.617887    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.617891    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.617894    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.617913    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:14 GMT
	I0728 18:41:14.618029    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"488","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:41:14.816222    4457 request.go:629] Waited for 197.76697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:14.816296    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:41:14.816310    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:14.816332    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:14.816344    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:14.818980    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:14.818996    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:14.819004    4457 round_trippers.go:580]     Audit-Id: 1fab62a9-f3a2-4524-bba6-7168d3c40b1c
	I0728 18:41:14.819008    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:14.819012    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:14.819016    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:14.819019    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:14.819025    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:14.819107    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"512","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3142 chars]
	I0728 18:41:14.819314    4457 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:14.819325    4457 pod_ready.go:81] duration metric: took 382.271949ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:14.819337    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.017330    4457 request.go:629] Waited for 197.92175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:41:15.017492    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:41:15.017513    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.017524    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.017533    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.020083    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.020101    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.020109    4457 round_trippers.go:580]     Audit-Id: 52d5102d-ccfc-4431-9d8c-293a1bf9e524
	I0728 18:41:15.020113    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.020119    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.020124    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.020128    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.020132    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.020236    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"381","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0728 18:41:15.217317    4457 request.go:629] Waited for 196.739764ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.217503    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.217518    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.217537    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.217546    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.220476    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.220489    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.220496    4457 round_trippers.go:580]     Audit-Id: 2ba4e00c-940a-4166-8a8d-113da5ef2a56
	I0728 18:41:15.220502    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.220506    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.220511    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.220515    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.220519    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.220961    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:15.221221    4457 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:15.221234    4457 pod_ready.go:81] duration metric: took 401.897714ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.221243    4457 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.415671    4457 request.go:629] Waited for 194.352358ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:41:15.415801    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:41:15.415812    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.415822    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.415830    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.418370    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.418388    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.418396    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.418400    4457 round_trippers.go:580]     Audit-Id: a4b392a9-87eb-4d6b-a818-7b8efb7d5bba
	I0728 18:41:15.418404    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.418407    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.418410    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.418414    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.418553    4457 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"344","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0728 18:41:15.616023    4457 request.go:629] Waited for 197.174221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.616091    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:41:15.616175    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.616189    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.616196    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.618751    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.618767    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.618774    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.618779    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.618782    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.618786    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:15 GMT
	I0728 18:41:15.618797    4457 round_trippers.go:580]     Audit-Id: e98369b6-dca2-4e64-96fa-466b02509d28
	I0728 18:41:15.618802    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.618868    4457 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0728 18:41:15.619106    4457 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:41:15.619118    4457 pod_ready.go:81] duration metric: took 397.877259ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:41:15.619127    4457 pod_ready.go:38] duration metric: took 1.200751144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:41:15.619147    4457 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:41:15.619211    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:41:15.631548    4457 system_svc.go:56] duration metric: took 12.399324ms WaitForService to wait for kubelet
	I0728 18:41:15.631564    4457 kubeadm.go:582] duration metric: took 23.923383994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:41:15.631580    4457 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:41:15.816731    4457 request.go:629] Waited for 185.100002ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:41:15.816907    4457 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:41:15.816919    4457 round_trippers.go:469] Request Headers:
	I0728 18:41:15.816931    4457 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:41:15.816938    4457 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:41:15.819688    4457 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:41:15.819706    4457 round_trippers.go:577] Response Headers:
	I0728 18:41:15.819716    4457 round_trippers.go:580]     Audit-Id: d738bc89-2916-4cc6-a013-48e7e7ac584d
	I0728 18:41:15.819722    4457 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:41:15.819726    4457 round_trippers.go:580]     Content-Type: application/json
	I0728 18:41:15.819731    4457 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:41:15.819738    4457 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:41:15.819742    4457 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:41:16 GMT
	I0728 18:41:15.820149    4457 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"423","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9145 chars]
	I0728 18:41:15.820524    4457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:41:15.820536    4457 node_conditions.go:123] node cpu capacity is 2
	I0728 18:41:15.820544    4457 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:41:15.820548    4457 node_conditions.go:123] node cpu capacity is 2
	I0728 18:41:15.820554    4457 node_conditions.go:105] duration metric: took 188.973464ms to run NodePressure ...
	I0728 18:41:15.820563    4457 start.go:241] waiting for startup goroutines ...
	I0728 18:41:15.820589    4457 start.go:255] writing updated cluster config ...
	I0728 18:41:15.821417    4457 ssh_runner.go:195] Run: rm -f paused
	I0728 18:41:15.861833    4457 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0728 18:41:15.937418    4457 out.go:177] * Done! kubectl is now configured to use "multinode-362000" cluster and "default" namespace by default
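The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the log come from the Kubernetes client's local rate limiter, which delays requests before they ever reach the API server. A minimal token-bucket sketch of that mechanism, using only the standard library, is below; the real limiter lives in client-go, and the `qps`/`burst` values here are illustrative, not minikube's configuration.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// limiter is a minimal token-bucket rate limiter, sketching the client-side
// throttling behind the "Waited for ... due to client-side throttling" lines
// above. Tokens accrue at qps per second up to burst; a request that finds
// no token sleeps until one accrues (tokens may go negative, reserving a
// future token, as client-go's limiter does).
type limiter struct {
	mu     sync.Mutex
	tokens float64
	burst  float64
	qps    float64
	last   time.Time
}

func newLimiter(qps float64, burst int) *limiter {
	return &limiter{tokens: float64(burst), burst: float64(burst), qps: qps, last: time.Now()}
}

// wait blocks until a token is available and returns how long it waited.
func (l *limiter) wait() time.Duration {
	l.mu.Lock()
	now := time.Now()
	// Refill tokens accrued since the last call, capped at the burst size.
	l.tokens += now.Sub(l.last).Seconds() * l.qps
	if l.tokens > l.burst {
		l.tokens = l.burst
	}
	l.last = now
	var delay time.Duration
	if l.tokens < 1 {
		// Not enough tokens: sleep until the next one accrues.
		delay = time.Duration((1 - l.tokens) / l.qps * float64(time.Second))
	}
	l.tokens--
	l.mu.Unlock()
	if delay > 0 {
		time.Sleep(delay)
	}
	return delay
}

func main() {
	lim := newLimiter(5, 1) // illustrative: 5 requests/sec, burst of 1
	for i := 0; i < 3; i++ {
		fmt.Printf("request %d waited %v\n", i, lim.wait())
	}
}
```

After the burst is spent, each subsequent request waits roughly 1/qps seconds, which is why the log shows waits clustering around ~200ms for back-to-back GETs.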
	
	
	==> Docker <==
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.459296259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.459872035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.460083053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.460133525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.460257304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:40:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28cbce0c6ed98e9c955fd2ad47b80253eef5c1d27aa60477f2b7c450ebe28396/resolv.conf as [nameserver 192.169.0.1]"
	Jul 29 01:40:25 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:40:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/de282e66d4c0558a185d2943edde7cc6d15f7c8e33b53206d011dc03e8998611/resolv.conf as [nameserver 192.169.0.1]"
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.627023969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.627173932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.627311883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.628582602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.666192284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.666339609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.666396957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:40:25 multinode-362000 dockerd[1280]: time="2024-07-29T01:40:25.667447445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.011444643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.011504420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.011513820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:17 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:17.012153566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:17 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:41:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e1e93dc724260e39b5f122928824d04094fd5f45fd8acdcd5a10bf238cc3411/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 01:41:18 multinode-362000 cri-dockerd[1171]: time="2024-07-29T01:41:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469182532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469226256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469238850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:41:18 multinode-362000 dockerd[1280]: time="2024-07-29T01:41:18.469344356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fe2daed37b2f7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   9e1e93dc72426       busybox-fc5497c4f-8hq8g
	4e01b33bc28ce       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   0                   de282e66d4c05       coredns-7db6d8ff4d-8npcw
	1255904b9cda9       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   28cbce0c6ed98       storage-provisioner
	a44317c7df722       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              2 minutes ago        Running             kindnet-cni               0                   a8dcd682eb598       kindnet-4mw5v
	473044afd6a20       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                0                   3050e483a8a8d       kube-proxy-tz5h5
	898c4f8b22692       76932a3b37d7e                                                                                         2 minutes ago        Running             kube-controller-manager   0                   c5e0cac22c053       kube-controller-manager-multinode-362000
	f4075b746de31       1f6d574d502f3                                                                                         2 minutes ago        Running             kube-apiserver            0                   1e7d4787a9c38       kube-apiserver-multinode-362000
	ef990ab76809a       3edc18e7b7672                                                                                         2 minutes ago        Running             kube-scheduler            0                   9bd37faa2f0ae       kube-scheduler-multinode-362000
	e54a6e4f589e1       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      0                   9ebd1495f3898       etcd-multinode-362000
	
	
	==> coredns [4e01b33bc28c] <==
	[INFO] 10.244.0.3:35329 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053956s
	[INFO] 10.244.1.2:42551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000064935s
	[INFO] 10.244.1.2:37359 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000065134s
	[INFO] 10.244.1.2:58343 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075262s
	[INFO] 10.244.1.2:49050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090366s
	[INFO] 10.244.1.2:53653 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107571s
	[INFO] 10.244.1.2:56614 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107796s
	[INFO] 10.244.1.2:36768 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092239s
	[INFO] 10.244.1.2:47351 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105143s
	[INFO] 10.244.0.3:57350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085706s
	[INFO] 10.244.0.3:38330 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000035689s
	[INFO] 10.244.0.3:34046 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005355s
	[INFO] 10.244.0.3:37101 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083044s
	[INFO] 10.244.1.2:35916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149042s
	[INFO] 10.244.1.2:52331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100403s
	[INFO] 10.244.1.2:59376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110433s
	[INFO] 10.244.1.2:54731 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089837s
	[INFO] 10.244.0.3:55981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000054156s
	[INFO] 10.244.0.3:52651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064795s
	[INFO] 10.244.0.3:44319 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045378s
	[INFO] 10.244.0.3:47078 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00004451s
	[INFO] 10.244.1.2:41717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100439s
	[INFO] 10.244.1.2:48492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113445s
	[INFO] 10.244.1.2:34934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060259s
	[INFO] 10.244.1.2:39620 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000143004s
	
	
	==> describe nodes <==
	Name:               multinode-362000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_28T18_39_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:39:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:42:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:41:28 +0000   Mon, 29 Jul 2024 01:40:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-362000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b0deb4a701e49b1b84599ec1f9f7e3e
	  System UUID:                81224f45-0000-0000-b808-288a2b40595b
	  Boot ID:                    96400dcc-d649-4a6a-b0b3-add8d75e0274
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8hq8g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-7db6d8ff4d-8npcw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m32s
	  kube-system                 etcd-multinode-362000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m46s
	  kube-system                 kindnet-4mw5v                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m33s
	  kube-system                 kube-apiserver-multinode-362000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-controller-manager-multinode-362000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-proxy-tz5h5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-scheduler-multinode-362000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m31s  kube-proxy       
	  Normal  Starting                 2m46s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m46s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m46s  kubelet          Node multinode-362000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m46s  kubelet          Node multinode-362000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m46s  kubelet          Node multinode-362000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m33s  node-controller  Node multinode-362000 event: Registered Node multinode-362000 in Controller
	  Normal  NodeReady                2m17s  kubelet          Node multinode-362000 status is now: NodeReady
	
	
	Name:               multinode-362000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_28T18_40_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:40:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:42:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:40:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:40:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:40:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:41:21 +0000   Mon, 29 Jul 2024 01:41:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-362000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff3e93e24af54aa0951fd8bce080e314
	  System UUID:                80374d1a-0000-0000-bdda-22c83e05ebd1
	  Boot ID:                    79f99fe7-d394-40c3-9dc4-0519f577ae97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-svnlx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kindnet-8hhwv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      111s
	  kube-system                 kube-proxy-dzz6p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x2 over 112s)  kubelet          Node multinode-362000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x2 over 112s)  kubelet          Node multinode-362000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x2 over 112s)  kubelet          Node multinode-362000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                 node-controller  Node multinode-362000-m02 event: Registered Node multinode-362000-m02 in Controller
	  Normal  NodeReady                89s                  kubelet          Node multinode-362000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.320796] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.324011] systemd-fstab-generator[497]: Ignoring "noauto" option for root device
	[  +0.106079] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +1.734505] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.254464] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.101763] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.126307] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +2.147515] kauditd_printk_skb: 161 callbacks suppressed
	[  +0.258303] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.101422] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.099797] systemd-fstab-generator[1148]: Ignoring "noauto" option for root device
	[  +0.130725] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +3.696971] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +2.228757] kauditd_printk_skb: 136 callbacks suppressed
	[  +0.343454] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[  +4.314486] systemd-fstab-generator[1701]: Ignoring "noauto" option for root device
	[  +0.386446] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.152718] systemd-fstab-generator[2104]: Ignoring "noauto" option for root device
	[  +0.081840] kauditd_printk_skb: 40 callbacks suppressed
	[Jul29 01:40] systemd-fstab-generator[2300]: Ignoring "noauto" option for root device
	[  +0.096619] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.761772] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 01:41] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e54a6e4f589e] <==
	{"level":"info","ts":"2024-07-29T01:39:52.172316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-07-29T01:39:52.172398Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-07-29T01:39:52.172634Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T01:39:52.172915Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:39:52.173008Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:39:52.175373Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:39:52.175411Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:39:52.605978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T01:39:52.606026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T01:39:52.60606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-07-29T01:39:52.606096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.611542Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-362000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:39:52.6118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:39:52.616009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:39:52.618374Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.622344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:39:52.622402Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:39:52.623812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:39:52.624929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-07-29T01:39:52.624972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.627332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.62747Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:42:42 up 3 min,  0 users,  load average: 0.26, 0.22, 0.09
	Linux multinode-362000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a44317c7df72] <==
	I0729 01:41:34.889201       1 main.go:299] handling current node
	I0729 01:41:44.894327       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:41:44.894427       1 main.go:299] handling current node
	I0729 01:41:44.894478       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:41:44.894499       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:41:54.890539       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:41:54.890564       1 main.go:299] handling current node
	I0729 01:41:54.890578       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:41:54.890583       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:04.885531       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:04.885603       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:04.885917       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:04.886044       1 main.go:299] handling current node
	I0729 01:42:14.885642       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:14.885721       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:14.886206       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:14.886227       1 main.go:299] handling current node
	I0729 01:42:24.887758       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:24.887826       1 main.go:299] handling current node
	I0729 01:42:24.887845       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:24.887854       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:34.895434       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:42:34.895488       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:34.895786       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:42:34.895828       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f4075b746de3] <==
	I0729 01:39:55.027867       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 01:39:55.030632       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 01:39:55.031107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:39:55.330517       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:39:55.358329       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:39:55.475281       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 01:39:55.479845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0729 01:39:55.480523       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:39:55.483264       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 01:39:56.059443       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:39:56.382419       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:39:56.389290       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 01:39:56.394905       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:40:09.714656       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 01:40:10.014240       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 01:41:19.760754       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52605: use of closed network connection
	E0729 01:41:19.949242       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52607: use of closed network connection
	E0729 01:41:20.133177       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52609: use of closed network connection
	E0729 01:41:20.310477       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52611: use of closed network connection
	E0729 01:41:20.496662       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52613: use of closed network connection
	E0729 01:41:20.688244       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52615: use of closed network connection
	E0729 01:41:21.004331       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52618: use of closed network connection
	E0729 01:41:21.187855       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52620: use of closed network connection
	E0729 01:41:21.377063       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52622: use of closed network connection
	E0729 01:41:21.554865       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	
	
	==> kube-controller-manager [898c4f8b2269] <==
	I0729 01:40:09.978740       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 01:40:10.434675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="418.130162ms"
	I0729 01:40:10.443618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.69659ms"
	I0729 01:40:10.443770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.711µs"
	I0729 01:40:11.018935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.436686ms"
	I0729 01:40:11.027101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.124535ms"
	I0729 01:40:11.027181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.955µs"
	I0729 01:40:25.080337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="144.077µs"
	I0729 01:40:25.091162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.818µs"
	I0729 01:40:26.585034       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.036µs"
	I0729 01:40:26.604104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.022661ms"
	I0729 01:40:26.604164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.335µs"
	I0729 01:40:29.266767       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 01:40:51.188661       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-362000-m02\" does not exist"
	I0729 01:40:51.198306       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-362000-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:40:54.270525       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-362000-m02"
	I0729 01:41:14.160112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	I0729 01:41:16.670352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.140966ms"
	I0729 01:41:16.689017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.156378ms"
	I0729 01:41:16.689239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.248µs"
	I0729 01:41:16.690375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.154µs"
	I0729 01:41:18.880601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.490626ms"
	I0729 01:41:18.880810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.371µs"
	I0729 01:41:19.267756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.930765ms"
	I0729 01:41:19.267954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.527µs"
	
	
	==> kube-proxy [473044afd6a2] <==
	I0729 01:40:11.348502       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:40:11.365653       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0729 01:40:11.402559       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:40:11.402601       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:40:11.402613       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:40:11.404701       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:40:11.404918       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:40:11.404927       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:40:11.405549       1 config.go:192] "Starting service config controller"
	I0729 01:40:11.405561       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:40:11.405574       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:40:11.405577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:40:11.406068       1 config.go:319] "Starting node config controller"
	I0729 01:40:11.406074       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:40:11.505886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:40:11.506110       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:40:11.506263       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef990ab76809] <==
	W0729 01:39:54.313459       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 01:39:54.313555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:39:54.313606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:39:54.313700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:39:54.319482       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:39:54.319640       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:39:54.320028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:54.320142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 01:39:54.320265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:39:54.320317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:39:54.320410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:54.320468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:39:54.320533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:39:54.320584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:39:54.326412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 01:39:54.326519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 01:39:54.326657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 01:39:54.326710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 01:39:54.326731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:39:54.326795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:39:55.161836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:39:55.161876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:39:55.228811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:55.228993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 01:39:55.708397       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 01:40:16 multinode-362000 kubelet[2112]: I0729 01:40:16.266374    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4mw5v" podStartSLOduration=4.053341838 podStartE2EDuration="7.266360831s" podCreationTimestamp="2024-07-29 01:40:09 +0000 UTC" firstStartedPulling="2024-07-29 01:40:10.912699498 +0000 UTC m=+14.768334322" lastFinishedPulling="2024-07-29 01:40:14.125718491 +0000 UTC m=+17.981353315" observedRunningTime="2024-07-29 01:40:14.399483421 +0000 UTC m=+18.255118244" watchObservedRunningTime="2024-07-29 01:40:16.266360831 +0000 UTC m=+20.121995659"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.062085    2112 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.078175    2112 topology_manager.go:215] "Topology Admit Handler" podUID="a0fcbb6f-1182-4d9e-bc04-456f1b4de1db" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8npcw"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.079796    2112 topology_manager.go:215] "Topology Admit Handler" podUID="9032906f-5102-4224-b894-d541cf7d67e7" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197585    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj8xr\" (UniqueName: \"kubernetes.io/projected/a0fcbb6f-1182-4d9e-bc04-456f1b4de1db-kube-api-access-sj8xr\") pod \"coredns-7db6d8ff4d-8npcw\" (UID: \"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db\") " pod="kube-system/coredns-7db6d8ff4d-8npcw"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197676    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9032906f-5102-4224-b894-d541cf7d67e7-tmp\") pod \"storage-provisioner\" (UID: \"9032906f-5102-4224-b894-d541cf7d67e7\") " pod="kube-system/storage-provisioner"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197706    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0fcbb6f-1182-4d9e-bc04-456f1b4de1db-config-volume\") pod \"coredns-7db6d8ff4d-8npcw\" (UID: \"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db\") " pod="kube-system/coredns-7db6d8ff4d-8npcw"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.197732    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqg6\" (UniqueName: \"kubernetes.io/projected/9032906f-5102-4224-b894-d541cf7d67e7-kube-api-access-gpqg6\") pod \"storage-provisioner\" (UID: \"9032906f-5102-4224-b894-d541cf7d67e7\") " pod="kube-system/storage-provisioner"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.558955    2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de282e66d4c0558a185d2943edde7cc6d15f7c8e33b53206d011dc03e8998611"
	Jul 29 01:40:25 multinode-362000 kubelet[2112]: I0729 01:40:25.563464    2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28cbce0c6ed98e9c955fd2ad47b80253eef5c1d27aa60477f2b7c450ebe28396"
	Jul 29 01:40:26 multinode-362000 kubelet[2112]: I0729 01:40:26.585155    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8npcw" podStartSLOduration=16.585141404 podStartE2EDuration="16.585141404s" podCreationTimestamp="2024-07-29 01:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:40:26.584921308 +0000 UTC m=+30.440556141" watchObservedRunningTime="2024-07-29 01:40:26.585141404 +0000 UTC m=+30.440776232"
	Jul 29 01:40:56 multinode-362000 kubelet[2112]: E0729 01:40:56.268334    2112 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:40:56 multinode-362000 kubelet[2112]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:41:16 multinode-362000 kubelet[2112]: I0729 01:41:16.673625    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=65.673612713 podStartE2EDuration="1m5.673612713s" podCreationTimestamp="2024-07-29 01:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:40:26.610162124 +0000 UTC m=+30.465796959" watchObservedRunningTime="2024-07-29 01:41:16.673612713 +0000 UTC m=+80.529247541"
	Jul 29 01:41:16 multinode-362000 kubelet[2112]: I0729 01:41:16.674168    2112 topology_manager.go:215] "Topology Admit Handler" podUID="d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9" podNamespace="default" podName="busybox-fc5497c4f-8hq8g"
	Jul 29 01:41:16 multinode-362000 kubelet[2112]: I0729 01:41:16.765246    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8zl\" (UniqueName: \"kubernetes.io/projected/d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9-kube-api-access-qb8zl\") pod \"busybox-fc5497c4f-8hq8g\" (UID: \"d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9\") " pod="default/busybox-fc5497c4f-8hq8g"
	Jul 29 01:41:21 multinode-362000 kubelet[2112]: E0729 01:41:21.188294    2112 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40458->127.0.0.1:39093: write tcp 127.0.0.1:40458->127.0.0.1:39093: write: broken pipe
	Jul 29 01:41:56 multinode-362000 kubelet[2112]: E0729 01:41:56.264596    2112 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:41:56 multinode-362000 kubelet[2112]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-362000 -n multinode-362000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-362000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (3.02s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (220.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-362000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-362000
E0728 18:45:33.123837    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-362000: (18.817510746s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr
E0728 18:45:50.057655    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:46:00.972404    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr: exit status 90 (3m17.613492462s)

                                                
                                                
-- stdout --
	* [multinode-362000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-362000" primary control-plane node in "multinode-362000" cluster
	* Restarting existing hyperkit VM for "multinode-362000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-362000-m02" worker node in "multinode-362000" cluster
	* Restarting existing hyperkit VM for "multinode-362000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.13
	* Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	  - env NO_PROXY=192.169.0.13
	* Verifying Kubernetes components...
	
	* Starting "multinode-362000-m03" worker node in "multinode-362000" cluster
	* Restarting existing hyperkit VM for "multinode-362000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.13,192.169.0.14
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:45:38.417840    4673 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:38.418019    4673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:38.418024    4673 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:38.418028    4673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:38.418193    4673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:45:38.419696    4673 out.go:298] Setting JSON to false
	I0728 18:45:38.442261    4673 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4509,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:45:38.442355    4673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:38.464048    4673 out.go:177] * [multinode-362000] minikube v1.33.1 on Darwin 14.5
	I0728 18:45:38.505773    4673 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:38.505813    4673 notify.go:220] Checking for updates...
	I0728 18:45:38.548494    4673 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:45:38.569795    4673 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:45:38.592752    4673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:38.613666    4673 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:45:38.634551    4673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:38.656368    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:38.656509    4673 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:38.656991    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:45:38.657052    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:45:38.666154    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52847
	I0728 18:45:38.666504    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:45:38.666920    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:45:38.666929    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:45:38.667143    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:45:38.667270    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:38.695663    4673 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 18:45:38.737553    4673 start.go:297] selected driver: hyperkit
	I0728 18:45:38.737606    4673 start.go:901] validating driver "hyperkit" against &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:38.737810    4673 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:38.737978    4673 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:38.738185    4673 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:45:38.747689    4673 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:45:38.751451    4673 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:45:38.751476    4673 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:45:38.754139    4673 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:38.754175    4673 cni.go:84] Creating CNI manager for ""
	I0728 18:45:38.754182    4673 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:45:38.754259    4673 start.go:340] cluster config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:38.754352    4673 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:38.796638    4673 out.go:177] * Starting "multinode-362000" primary control-plane node in "multinode-362000" cluster
	I0728 18:45:38.817741    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:38.817811    4673 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:45:38.817836    4673 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:38.818023    4673 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:45:38.818042    4673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:45:38.818228    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:45:38.819184    4673 start.go:360] acquireMachinesLock for multinode-362000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:38.819306    4673 start.go:364] duration metric: took 97.069µs to acquireMachinesLock for "multinode-362000"
	I0728 18:45:38.819343    4673 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:45:38.819363    4673 fix.go:54] fixHost starting: 
	I0728 18:45:38.819803    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:45:38.819830    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:45:38.828721    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52849
	I0728 18:45:38.829083    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:45:38.829443    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:45:38.829454    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:45:38.829748    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:45:38.829914    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:38.830027    4673 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:45:38.830122    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:45:38.830223    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:45:38.831091    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid 4468 missing from process table
	I0728 18:45:38.831121    4673 fix.go:112] recreateIfNeeded on multinode-362000: state=Stopped err=<nil>
	I0728 18:45:38.831135    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	W0728 18:45:38.831223    4673 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:45:38.872402    4673 out.go:177] * Restarting existing hyperkit VM for "multinode-362000" ...
	I0728 18:45:38.893469    4673 main.go:141] libmachine: (multinode-362000) Calling .Start
	I0728 18:45:38.893764    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:45:38.893802    4673 main.go:141] libmachine: (multinode-362000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid
	I0728 18:45:38.895528    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid 4468 missing from process table
	I0728 18:45:38.895559    4673 main.go:141] libmachine: (multinode-362000) DBG | pid 4468 is in state "Stopped"
	I0728 18:45:38.895596    4673 main.go:141] libmachine: (multinode-362000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid...
	I0728 18:45:38.896013    4673 main.go:141] libmachine: (multinode-362000) DBG | Using UUID 8122a2e4-0139-4f45-b808-288a2b40595b
	I0728 18:45:39.005368    4673 main.go:141] libmachine: (multinode-362000) DBG | Generated MAC e:8c:86:9:55:cf
	I0728 18:45:39.005393    4673 main.go:141] libmachine: (multinode-362000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:45:39.005522    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ae4e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:45:39.005558    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ae4e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:45:39.005591    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8122a2e4-0139-4f45-b808-288a2b40595b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:45:39.005622    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8122a2e4-0139-4f45-b808-288a2b40595b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:45:39.005634    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:45:39.007125    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Pid is 4686
	I0728 18:45:39.007618    4673 main.go:141] libmachine: (multinode-362000) DBG | Attempt 0
	I0728 18:45:39.007633    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:45:39.007728    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:45:39.009765    4673 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:45:39.009810    4673 main.go:141] libmachine: (multinode-362000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:45:39.009837    4673 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
	I0728 18:45:39.009858    4673 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:45:39.009873    4673 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:45:39.009887    4673 main.go:141] libmachine: (multinode-362000) DBG | Found match: e:8c:86:9:55:cf
	I0728 18:45:39.009900    4673 main.go:141] libmachine: (multinode-362000) DBG | IP: 192.169.0.13
	I0728 18:45:39.009946    4673 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:45:39.010720    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:39.010962    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:45:39.011561    4673 machine.go:94] provisionDockerMachine start ...
	I0728 18:45:39.011574    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:39.011710    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:39.011855    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:39.011973    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:39.012065    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:39.012173    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:39.012309    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:39.012528    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:39.012539    4673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:45:39.015353    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:45:39.067526    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:45:39.068273    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:45:39.068290    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:45:39.068303    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:45:39.068309    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:45:39.451282    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:45:39.451295    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:45:39.565812    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:45:39.565831    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:45:39.565861    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:45:39.565873    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:45:39.566705    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:45:39.566719    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:45:45.138571    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:45:45.138644    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:45:45.138656    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:45:45.162462    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:45:50.070800    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:45:50.070814    4673 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:45:50.070951    4673 buildroot.go:166] provisioning hostname "multinode-362000"
	I0728 18:45:50.070963    4673 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:45:50.071066    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.071167    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.071260    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.071343    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.071434    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.071571    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.071712    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.071726    4673 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000 && echo "multinode-362000" | sudo tee /etc/hostname
	I0728 18:45:50.134854    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000
	
	I0728 18:45:50.134872    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.134997    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.135116    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.135200    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.135297    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.135423    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.135563    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.135574    4673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:45:50.196846    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:45:50.196869    4673 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:45:50.196892    4673 buildroot.go:174] setting up certificates
	I0728 18:45:50.196906    4673 provision.go:84] configureAuth start
	I0728 18:45:50.196914    4673 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:45:50.197054    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:50.197156    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.197243    4673 provision.go:143] copyHostCerts
	I0728 18:45:50.197277    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:45:50.197358    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:45:50.197367    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:45:50.197515    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:45:50.197722    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:45:50.197765    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:45:50.197769    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:45:50.197852    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:45:50.198031    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:45:50.198074    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:45:50.198079    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:45:50.198172    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:45:50.198353    4673 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-362000]
	I0728 18:45:50.322970    4673 provision.go:177] copyRemoteCerts
	I0728 18:45:50.323026    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:45:50.323055    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.323169    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.323269    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.323356    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.323453    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:50.356787    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:45:50.356852    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:45:50.375891    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:45:50.375948    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0728 18:45:50.394763    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:45:50.394825    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:45:50.414207    4673 provision.go:87] duration metric: took 217.291265ms to configureAuth
	I0728 18:45:50.414219    4673 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:45:50.414383    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:50.414397    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:50.414539    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.414635    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.414726    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.414802    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.414885    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.414986    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.415110    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.415118    4673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:45:50.467473    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:45:50.467486    4673 buildroot.go:70] root file system type: tmpfs
	I0728 18:45:50.467551    4673 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:45:50.467567    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.467707    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.467803    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.467913    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.468006    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.468136    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.468282    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.468326    4673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:45:50.530974    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:45:50.531001    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.531127    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.531214    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.531298    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.531411    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.531541    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.531694    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.531706    4673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:45:52.175000    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:45:52.175015    4673 machine.go:97] duration metric: took 13.163539557s to provisionDockerMachine
	I0728 18:45:52.175026    4673 start.go:293] postStartSetup for "multinode-362000" (driver="hyperkit")
	I0728 18:45:52.175033    4673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:45:52.175047    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.175252    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:45:52.175266    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.175354    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.175448    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.175556    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.175637    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:52.213189    4673 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:45:52.217247    4673 command_runner.go:130] > NAME=Buildroot
	I0728 18:45:52.217257    4673 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:45:52.217261    4673 command_runner.go:130] > ID=buildroot
	I0728 18:45:52.217265    4673 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:45:52.217277    4673 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:45:52.217385    4673 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:45:52.217398    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:45:52.217506    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:45:52.217702    4673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:45:52.217709    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:45:52.217927    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:45:52.228721    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:45:52.261274    4673 start.go:296] duration metric: took 86.240044ms for postStartSetup
	I0728 18:45:52.261300    4673 fix.go:56] duration metric: took 13.442043564s for fixHost
	I0728 18:45:52.261313    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.261436    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.261529    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.261617    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.261699    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.261853    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:52.261989    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:52.261996    4673 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:45:52.314183    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217552.447141799
	
	I0728 18:45:52.314194    4673 fix.go:216] guest clock: 1722217552.447141799
	I0728 18:45:52.314199    4673 fix.go:229] Guest: 2024-07-28 18:45:52.447141799 -0700 PDT Remote: 2024-07-28 18:45:52.261303 -0700 PDT m=+13.878368752 (delta=185.838799ms)
	I0728 18:45:52.314216    4673 fix.go:200] guest clock delta is within tolerance: 185.838799ms
	I0728 18:45:52.314219    4673 start.go:83] releasing machines lock for "multinode-362000", held for 13.495000417s
	I0728 18:45:52.314238    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.314391    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:52.314503    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.314872    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.314986    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.315084    4673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:45:52.315119    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.315146    4673 ssh_runner.go:195] Run: cat /version.json
	I0728 18:45:52.315159    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.315212    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.315241    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.315346    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.315362    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.315425    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.315449    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.315513    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:52.315535    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:52.402603    4673 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:45:52.403517    4673 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 18:45:52.403709    4673 ssh_runner.go:195] Run: systemctl --version
	I0728 18:45:52.408812    4673 command_runner.go:130] > systemd 252 (252)
	I0728 18:45:52.408834    4673 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 18:45:52.409070    4673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:45:52.413189    4673 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:45:52.413232    4673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:45:52.413280    4673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:45:52.426525    4673 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:45:52.426622    4673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:45:52.426631    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:45:52.426735    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:45:52.441487    4673 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:45:52.441777    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:45:52.450602    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:45:52.459645    4673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:45:52.459689    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:45:52.468580    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:45:52.477277    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:45:52.486024    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:45:52.494784    4673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:45:52.503698    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:45:52.512471    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:45:52.521118    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:45:52.529925    4673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:45:52.537899    4673 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:45:52.538051    4673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:45:52.546207    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:52.648661    4673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:45:52.668071    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:45:52.668148    4673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:45:52.681866    4673 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:45:52.682026    4673 command_runner.go:130] > [Unit]
	I0728 18:45:52.682036    4673 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:45:52.682044    4673 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:45:52.682050    4673 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:45:52.682054    4673 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:45:52.682058    4673 command_runner.go:130] > StartLimitBurst=3
	I0728 18:45:52.682062    4673 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:45:52.682066    4673 command_runner.go:130] > [Service]
	I0728 18:45:52.682069    4673 command_runner.go:130] > Type=notify
	I0728 18:45:52.682072    4673 command_runner.go:130] > Restart=on-failure
	I0728 18:45:52.682079    4673 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:45:52.682087    4673 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:45:52.682093    4673 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:45:52.682099    4673 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:45:52.682105    4673 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:45:52.682114    4673 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:45:52.682121    4673 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:45:52.682130    4673 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:45:52.682137    4673 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:45:52.682141    4673 command_runner.go:130] > ExecStart=
	I0728 18:45:52.682153    4673 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:45:52.682156    4673 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:45:52.682162    4673 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:45:52.682167    4673 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:45:52.682172    4673 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:45:52.682175    4673 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:45:52.682179    4673 command_runner.go:130] > LimitCORE=infinity
	I0728 18:45:52.682185    4673 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:45:52.682190    4673 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:45:52.682193    4673 command_runner.go:130] > TasksMax=infinity
	I0728 18:45:52.682197    4673 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:45:52.682202    4673 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:45:52.682205    4673 command_runner.go:130] > Delegate=yes
	I0728 18:45:52.682210    4673 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:45:52.682214    4673 command_runner.go:130] > KillMode=process
	I0728 18:45:52.682218    4673 command_runner.go:130] > [Install]
	I0728 18:45:52.682230    4673 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:45:52.682352    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:45:52.694437    4673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:45:52.714095    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:45:52.724786    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:45:52.734755    4673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:45:52.757057    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:45:52.767836    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:45:52.783282    4673 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:45:52.783636    4673 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:45:52.786451    4673 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:45:52.786625    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:45:52.793644    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:45:52.807004    4673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:45:52.902471    4673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:45:52.993894    4673 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:45:52.993959    4673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:45:53.008812    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:53.107610    4673 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:45:55.429561    4673 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321948073s)
	I0728 18:45:55.429625    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:45:55.441155    4673 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:45:55.453910    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:45:55.464413    4673 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:45:55.559169    4673 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:45:55.663530    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:55.779347    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:45:55.792910    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:45:55.803704    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:55.899175    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:45:55.958796    4673 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:45:55.958854    4673 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:45:55.962856    4673 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:45:55.962869    4673 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:45:55.962874    4673 command_runner.go:130] > Device: 0,22	Inode: 747         Links: 1
	I0728 18:45:55.962888    4673 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:45:55.962896    4673 command_runner.go:130] > Access: 2024-07-29 01:45:56.043761094 +0000
	I0728 18:45:55.962904    4673 command_runner.go:130] > Modify: 2024-07-29 01:45:56.043761094 +0000
	I0728 18:45:55.962911    4673 command_runner.go:130] > Change: 2024-07-29 01:45:56.045760874 +0000
	I0728 18:45:55.962930    4673 command_runner.go:130] >  Birth: -
	I0728 18:45:55.962992    4673 start.go:563] Will wait 60s for crictl version
	I0728 18:45:55.963033    4673 ssh_runner.go:195] Run: which crictl
	I0728 18:45:55.965939    4673 command_runner.go:130] > /usr/bin/crictl
	I0728 18:45:55.966156    4673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:45:55.993479    4673 command_runner.go:130] > Version:  0.1.0
	I0728 18:45:55.993491    4673 command_runner.go:130] > RuntimeName:  docker
	I0728 18:45:55.993495    4673 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:45:55.993499    4673 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:45:55.994588    4673 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:45:55.994652    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:45:56.012242    4673 command_runner.go:130] > 27.1.0
	I0728 18:45:56.012372    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:45:56.030310    4673 command_runner.go:130] > 27.1.0
	I0728 18:45:56.071630    4673 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:45:56.071677    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:56.072056    4673 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:45:56.076440    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:45:56.085792    4673 kubeadm.go:883] updating cluster {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0728 18:45:56.085876    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:56.085938    4673 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:45:56.099093    4673 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0728 18:45:56.099107    4673 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0728 18:45:56.099112    4673 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0728 18:45:56.099116    4673 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0728 18:45:56.099119    4673 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0728 18:45:56.099140    4673 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0728 18:45:56.099160    4673 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:45:56.099165    4673 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0728 18:45:56.099169    4673 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:45:56.099173    4673 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 18:45:56.099743    4673 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 18:45:56.099751    4673 docker.go:615] Images already preloaded, skipping extraction
	I0728 18:45:56.099827    4673 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:45:56.111107    4673 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0728 18:45:56.111120    4673 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0728 18:45:56.111124    4673 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0728 18:45:56.111132    4673 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0728 18:45:56.111136    4673 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0728 18:45:56.111143    4673 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0728 18:45:56.111147    4673 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:45:56.111151    4673 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0728 18:45:56.111155    4673 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:45:56.111159    4673 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 18:45:56.111676    4673 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 18:45:56.111699    4673 cache_images.go:84] Images are preloaded, skipping loading
	I0728 18:45:56.111712    4673 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0728 18:45:56.111800    4673 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:45:56.111865    4673 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:45:56.146274    4673 command_runner.go:130] > cgroupfs
	I0728 18:45:56.146885    4673 cni.go:84] Creating CNI manager for ""
	I0728 18:45:56.146895    4673 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:45:56.146906    4673 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:45:56.146922    4673 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-362000 NodeName:multinode-362000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:45:56.147002    4673 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-362000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:45:56.147062    4673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:45:56.154499    4673 command_runner.go:130] > kubeadm
	I0728 18:45:56.154508    4673 command_runner.go:130] > kubectl
	I0728 18:45:56.154512    4673 command_runner.go:130] > kubelet
	I0728 18:45:56.154526    4673 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:45:56.154570    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:45:56.161753    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0728 18:45:56.175166    4673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:45:56.188501    4673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0728 18:45:56.201915    4673 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:45:56.204741    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:45:56.213831    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:56.314251    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:45:56.327877    4673 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.13
	I0728 18:45:56.327888    4673 certs.go:194] generating shared ca certs ...
	I0728 18:45:56.327898    4673 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:56.328070    4673 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:45:56.328149    4673 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:45:56.328160    4673 certs.go:256] generating profile certs ...
	I0728 18:45:56.328253    4673 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key
	I0728 18:45:56.328332    4673 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57
	I0728 18:45:56.328411    4673 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key
	I0728 18:45:56.328419    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:45:56.328440    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:45:56.328458    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:45:56.328476    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:45:56.328493    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 18:45:56.328522    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 18:45:56.328552    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 18:45:56.328574    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 18:45:56.328677    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:45:56.328726    4673 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:45:56.328735    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:45:56.328769    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:45:56.328817    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:45:56.328854    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:45:56.328933    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:45:56.328968    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.328989    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.329006    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.329433    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:45:56.360113    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:45:56.384683    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:45:56.414348    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:45:56.438537    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0728 18:45:56.458006    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:45:56.477011    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:45:56.496093    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:45:56.515234    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:45:56.534555    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:45:56.553842    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:45:56.573050    4673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 18:45:56.586612    4673 ssh_runner.go:195] Run: openssl version
	I0728 18:45:56.590610    4673 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:45:56.590830    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:45:56.599807    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.602970    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.603135    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.603178    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.607074    4673 command_runner.go:130] > b5213941
	I0728 18:45:56.607310    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:45:56.616281    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:45:56.625173    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.628303    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.628476    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.628509    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.632415    4673 command_runner.go:130] > 51391683
	I0728 18:45:56.632627    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:45:56.641669    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:45:56.650722    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.653803    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.653989    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.654026    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.657897    4673 command_runner.go:130] > 3ec20f2e
	I0728 18:45:56.658048    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:45:56.666799    4673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:45:56.669910    4673 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:45:56.669920    4673 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0728 18:45:56.669925    4673 command_runner.go:130] > Device: 253,1	Inode: 531528      Links: 1
	I0728 18:45:56.669936    4673 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 18:45:56.669941    4673 command_runner.go:130] > Access: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.669946    4673 command_runner.go:130] > Modify: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.669950    4673 command_runner.go:130] > Change: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.669955    4673 command_runner.go:130] >  Birth: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.670100    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0728 18:45:56.674117    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.674335    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0728 18:45:56.678337    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.678524    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0728 18:45:56.682543    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.682745    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0728 18:45:56.686691    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.686874    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0728 18:45:56.690811    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.690989    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0728 18:45:56.694929    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.695116    4673 kubeadm.go:392] StartCluster: {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:56.695246    4673 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:45:56.707569    4673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:45:56.715778    4673 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0728 18:45:56.715788    4673 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0728 18:45:56.715792    4673 command_runner.go:130] > /var/lib/minikube/etcd:
	I0728 18:45:56.715795    4673 command_runner.go:130] > member
	I0728 18:45:56.715913    4673 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0728 18:45:56.715924    4673 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0728 18:45:56.715960    4673 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 18:45:56.724101    4673 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:45:56.724439    4673 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-362000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:45:56.724526    4673 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1006/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-362000" cluster setting kubeconfig missing "multinode-362000" context setting]
	I0728 18:45:56.724729    4673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/kubeconfig: {Name:mk76ac5b4283108fca1a66cc5cd0791fbea0691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:56.725352    4673 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:45:56.725564    4673 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10bd5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:45:56.725891    4673 cert_rotation.go:137] Starting client certificate rotation controller
	I0728 18:45:56.726067    4673 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 18:45:56.733884    4673 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
	I0728 18:45:56.733899    4673 kubeadm.go:1160] stopping kube-system containers ...
	I0728 18:45:56.733958    4673 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:45:56.747508    4673 command_runner.go:130] > 4e01b33bc28c
	I0728 18:45:56.747520    4673 command_runner.go:130] > 1255904b9cda
	I0728 18:45:56.747524    4673 command_runner.go:130] > 28cbce0c6ed9
	I0728 18:45:56.747527    4673 command_runner.go:130] > de282e66d4c0
	I0728 18:45:56.747530    4673 command_runner.go:130] > a44317c7df72
	I0728 18:45:56.747543    4673 command_runner.go:130] > 473044afd6a2
	I0728 18:45:56.747547    4673 command_runner.go:130] > 3050e483a8a8
	I0728 18:45:56.747550    4673 command_runner.go:130] > a8dcd682eb59
	I0728 18:45:56.747553    4673 command_runner.go:130] > 898c4f8b2269
	I0728 18:45:56.747559    4673 command_runner.go:130] > f4075b746de3
	I0728 18:45:56.747564    4673 command_runner.go:130] > ef990ab76809
	I0728 18:45:56.747568    4673 command_runner.go:130] > e54a6e4f589e
	I0728 18:45:56.747571    4673 command_runner.go:130] > c5e0cac22c05
	I0728 18:45:56.747575    4673 command_runner.go:130] > 9bd37faa2f0a
	I0728 18:45:56.747578    4673 command_runner.go:130] > 1e7d4787a9c3
	I0728 18:45:56.747581    4673 command_runner.go:130] > 9ebd1495f389
	I0728 18:45:56.748134    4673 docker.go:483] Stopping containers: [4e01b33bc28c 1255904b9cda 28cbce0c6ed9 de282e66d4c0 a44317c7df72 473044afd6a2 3050e483a8a8 a8dcd682eb59 898c4f8b2269 f4075b746de3 ef990ab76809 e54a6e4f589e c5e0cac22c05 9bd37faa2f0a 1e7d4787a9c3 9ebd1495f389]
	I0728 18:45:56.748209    4673 ssh_runner.go:195] Run: docker stop 4e01b33bc28c 1255904b9cda 28cbce0c6ed9 de282e66d4c0 a44317c7df72 473044afd6a2 3050e483a8a8 a8dcd682eb59 898c4f8b2269 f4075b746de3 ef990ab76809 e54a6e4f589e c5e0cac22c05 9bd37faa2f0a 1e7d4787a9c3 9ebd1495f389
	I0728 18:45:56.760719    4673 command_runner.go:130] > 4e01b33bc28c
	I0728 18:45:56.760732    4673 command_runner.go:130] > 1255904b9cda
	I0728 18:45:56.760735    4673 command_runner.go:130] > 28cbce0c6ed9
	I0728 18:45:56.760947    4673 command_runner.go:130] > de282e66d4c0
	I0728 18:45:56.763002    4673 command_runner.go:130] > a44317c7df72
	I0728 18:45:56.764177    4673 command_runner.go:130] > 473044afd6a2
	I0728 18:45:56.764193    4673 command_runner.go:130] > 3050e483a8a8
	I0728 18:45:56.764198    4673 command_runner.go:130] > a8dcd682eb59
	I0728 18:45:56.764201    4673 command_runner.go:130] > 898c4f8b2269
	I0728 18:45:56.764205    4673 command_runner.go:130] > f4075b746de3
	I0728 18:45:56.764208    4673 command_runner.go:130] > ef990ab76809
	I0728 18:45:56.764211    4673 command_runner.go:130] > e54a6e4f589e
	I0728 18:45:56.764215    4673 command_runner.go:130] > c5e0cac22c05
	I0728 18:45:56.764218    4673 command_runner.go:130] > 9bd37faa2f0a
	I0728 18:45:56.764222    4673 command_runner.go:130] > 1e7d4787a9c3
	I0728 18:45:56.764225    4673 command_runner.go:130] > 9ebd1495f389
	I0728 18:45:56.765046    4673 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 18:45:56.777782    4673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:45:56.785743    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0728 18:45:56.785754    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0728 18:45:56.785760    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0728 18:45:56.785765    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:45:56.785958    4673 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:45:56.785966    4673 kubeadm.go:157] found existing configuration files:
	
	I0728 18:45:56.786004    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 18:45:56.793624    4673 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:45:56.793639    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:45:56.793681    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:45:56.801434    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 18:45:56.808929    4673 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:45:56.808944    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:45:56.808980    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:45:56.816960    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 18:45:56.824507    4673 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:45:56.824525    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:45:56.824561    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:45:56.832448    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 18:45:56.840091    4673 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:45:56.840107    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:45:56.840137    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 18:45:56.847993    4673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:45:56.855855    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:56.931374    4673 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:45:56.931387    4673 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0728 18:45:56.931392    4673 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0728 18:45:56.931397    4673 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 18:45:56.931404    4673 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0728 18:45:56.931410    4673 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0728 18:45:56.931415    4673 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0728 18:45:56.931421    4673 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0728 18:45:56.931426    4673 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0728 18:45:56.931432    4673 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 18:45:56.931437    4673 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 18:45:56.931441    4673 command_runner.go:130] > [certs] Using the existing "sa" key
	I0728 18:45:56.931458    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:56.972637    4673 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:45:57.092111    4673 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:45:57.430834    4673 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0728 18:45:57.545975    4673 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:45:57.694596    4673 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:45:57.837182    4673 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:45:57.839024    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:57.887965    4673 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:45:57.887980    4673 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:45:57.887985    4673 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:45:58.004235    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:58.063887    4673 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:45:58.063905    4673 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:45:58.066931    4673 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:45:58.070813    4673 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:45:58.072137    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:58.132428    4673 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:45:58.140407    4673 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:45:58.140471    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:45:58.641196    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:45:59.140593    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:45:59.153956    4673 command_runner.go:130] > 1742
	I0728 18:45:59.153999    4673 api_server.go:72] duration metric: took 1.013610274s to wait for apiserver process to appear ...
	I0728 18:45:59.154007    4673 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:45:59.154023    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:01.283789    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 18:46:01.283808    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 18:46:01.283816    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:01.329010    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 18:46:01.329031    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 18:46:01.655183    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:01.660000    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 18:46:01.660019    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 18:46:02.154174    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:02.157536    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 18:46:02.157553    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 18:46:02.655261    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:02.659989    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:46:02.660053    4673 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0728 18:46:02.660059    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:02.660066    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:02.660070    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:02.668512    4673 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0728 18:46:02.668524    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:02.668530    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:02.668533    4673 round_trippers.go:580]     Content-Length: 263
	I0728 18:46:02.668535    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:02 GMT
	I0728 18:46:02.668537    4673 round_trippers.go:580]     Audit-Id: 8f70f441-9df6-47ba-a3cc-867901aa7c72
	I0728 18:46:02.668539    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:02.668542    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:02.668549    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:02.668588    4673 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 18:46:02.668657    4673 api_server.go:141] control plane version: v1.30.3
	I0728 18:46:02.668669    4673 api_server.go:131] duration metric: took 3.514682856s to wait for apiserver health ...
	I0728 18:46:02.668676    4673 cni.go:84] Creating CNI manager for ""
	I0728 18:46:02.668680    4673 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:46:02.690995    4673 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 18:46:02.711028    4673 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 18:46:02.717331    4673 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0728 18:46:02.717346    4673 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0728 18:46:02.717351    4673 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0728 18:46:02.717356    4673 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 18:46:02.717365    4673 command_runner.go:130] > Access: 2024-07-29 01:45:49.171141326 +0000
	I0728 18:46:02.717370    4673 command_runner.go:130] > Modify: 2024-07-23 05:15:32.000000000 +0000
	I0728 18:46:02.717374    4673 command_runner.go:130] > Change: 2024-07-29 01:45:46.978185440 +0000
	I0728 18:46:02.717378    4673 command_runner.go:130] >  Birth: -
	I0728 18:46:02.717629    4673 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0728 18:46:02.717637    4673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0728 18:46:02.735872    4673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 18:46:03.116876    4673 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0728 18:46:03.136590    4673 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0728 18:46:03.205885    4673 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0728 18:46:03.245561    4673 command_runner.go:130] > daemonset.apps/kindnet configured
	I0728 18:46:03.246956    4673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 18:46:03.247010    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:03.247017    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.247025    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.247029    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.249490    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.249499    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.249504    4673 round_trippers.go:580]     Audit-Id: dd42f93b-27cd-4a41-b3a1-a670734a78af
	I0728 18:46:03.249508    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.249511    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.249514    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.249517    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.249519    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.250510    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87605 chars]
	I0728 18:46:03.254552    4673 system_pods.go:59] 12 kube-system pods found
	I0728 18:46:03.254573    4673 system_pods.go:61] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 18:46:03.254579    4673 system_pods.go:61] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 18:46:03.254585    4673 system_pods.go:61] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0728 18:46:03.254588    4673 system_pods.go:61] "kindnet-5dhhf" [e124802a-dbb6-4100-8c49-8a75ea05217a] Running
	I0728 18:46:03.254591    4673 system_pods.go:61] "kindnet-8hhwv" [487e32b7-7175-4187-89ba-90bb4d597681] Running
	I0728 18:46:03.254595    4673 system_pods.go:61] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0728 18:46:03.254600    4673 system_pods.go:61] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 18:46:03.254603    4673 system_pods.go:61] "kube-proxy-7gm24" [9db42267-b01f-40a3-bf21-c4d8cf6fb372] Running
	I0728 18:46:03.254606    4673 system_pods.go:61] "kube-proxy-dzz6p" [577d6ba2-e17a-426f-8315-1688766fa435] Running
	I0728 18:46:03.254610    4673 system_pods.go:61] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0728 18:46:03.254614    4673 system_pods.go:61] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 18:46:03.254618    4673 system_pods.go:61] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 18:46:03.254623    4673 system_pods.go:74] duration metric: took 7.66063ms to wait for pod list to return data ...
	I0728 18:46:03.254629    4673 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:46:03.254667    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:46:03.254672    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.254677    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.254681    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.256449    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.256459    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.256467    4673 round_trippers.go:580]     Audit-Id: 662ec3c8-4097-484a-8e4a-fbb1205be3b7
	I0728 18:46:03.256472    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.256475    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.256481    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.256486    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.256495    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.256655    4673 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14303 chars]
	I0728 18:46:03.257221    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:03.257234    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:03.257244    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:03.257247    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:03.257251    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:03.257256    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:03.257260    4673 node_conditions.go:105] duration metric: took 2.627088ms to run NodePressure ...
	I0728 18:46:03.257272    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:46:03.476491    4673 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0728 18:46:03.560024    4673 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0728 18:46:03.561221    4673 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0728 18:46:03.561302    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0728 18:46:03.561314    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.561323    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.561329    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.564327    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.564345    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.564357    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.564366    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.564373    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.564377    4673 round_trippers.go:580]     Audit-Id: 5398763f-98bb-4d63-b62f-65eae8f2bf8c
	I0728 18:46:03.564383    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.564387    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.564706    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"835","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0728 18:46:03.565447    4673 kubeadm.go:739] kubelet initialised
	I0728 18:46:03.565457    4673 kubeadm.go:740] duration metric: took 4.224667ms waiting for restarted kubelet to initialise ...
	I0728 18:46:03.565464    4673 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:03.565496    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:03.565501    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.565507    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.565512    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.567799    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.567810    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.567815    4673 round_trippers.go:580]     Audit-Id: 71e7cf77-43dd-4eba-83ad-aec1770533f7
	I0728 18:46:03.567818    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.567821    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.567824    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.567827    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.567829    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.569091    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87012 chars]
	I0728 18:46:03.571083    4673 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.571138    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:03.571144    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.571150    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.571155    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.572865    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.572879    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.572885    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.572889    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.572893    4673 round_trippers.go:580]     Audit-Id: 303f0de4-e0fa-4af7-b2cf-e9f991463329
	I0728 18:46:03.572896    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.572915    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.572924    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.573039    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:03.573358    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.573366    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.573373    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.573379    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.575099    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.575116    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.575125    4673 round_trippers.go:580]     Audit-Id: 002d08be-b007-4e7e-9108-b8d1a891c201
	I0728 18:46:03.575129    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.575152    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.575162    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.575169    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.575173    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.575479    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.575792    4673 pod_ready.go:97] node "multinode-362000" hosting pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.575810    4673 pod_ready.go:81] duration metric: took 4.711453ms for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.575822    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.575835    4673 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.575896    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:46:03.575904    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.575913    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.575918    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.577693    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.577718    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.577725    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.577730    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.577733    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.577737    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.577740    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.577743    4673 round_trippers.go:580]     Audit-Id: caa7915d-a454-4b20-a4c7-f046a70c29ae
	I0728 18:46:03.577872    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"835","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0728 18:46:03.578174    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.578182    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.578188    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.578193    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.579777    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.579794    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.579804    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.579810    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.579816    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.579822    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.579827    4673 round_trippers.go:580]     Audit-Id: 8388909f-28e5-41f0-9e2b-2accd82fdb2c
	I0728 18:46:03.579831    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.580001    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.580271    4673 pod_ready.go:97] node "multinode-362000" hosting pod "etcd-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.580284    4673 pod_ready.go:81] duration metric: took 4.441108ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.580292    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "etcd-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.580305    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.580345    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:46:03.580351    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.580357    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.580361    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.582253    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.582265    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.582270    4673 round_trippers.go:580]     Audit-Id: ff2b1fa2-01db-45c0-9dde-77f359073a3e
	I0728 18:46:03.582274    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.582278    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.582281    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.582284    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.582287    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.582386    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"838","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0728 18:46:03.582697    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.582706    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.582712    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.582716    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.584391    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.584402    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.584408    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.584411    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.584414    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.584417    4673 round_trippers.go:580]     Audit-Id: 70157737-48ef-440c-a2fa-d76a7118783f
	I0728 18:46:03.584419    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.584422    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.584882    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.585118    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-apiserver-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.585129    4673 pod_ready.go:81] duration metric: took 4.817707ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.585136    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-apiserver-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.585144    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.585187    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:46:03.585192    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.585197    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.585202    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.587217    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.587230    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.587235    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.587238    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.587240    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.587243    4673 round_trippers.go:580]     Audit-Id: 46934f17-f1e2-4937-8162-9c93621655cb
	I0728 18:46:03.587245    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.587248    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.587348    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"839","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0728 18:46:03.647355    4673 request.go:629] Waited for 59.673173ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.647406    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.647415    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.647426    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.647434    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.649728    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.649739    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.649746    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.649751    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.649755    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.649758    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.649761    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.649764    4673 round_trippers.go:580]     Audit-Id: 2c3f7e32-6a26-47e9-8afc-4ce7375e35c5
	I0728 18:46:03.650362    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.650555    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-controller-manager-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.650569    4673 pod_ready.go:81] duration metric: took 65.419076ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.650576    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-controller-manager-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.650582    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.848587    4673 request.go:629] Waited for 197.964405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:46:03.848742    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:46:03.848753    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.848764    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.848770    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.851206    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.851227    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.851237    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.851246    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.851251    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.851256    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.851259    4673 round_trippers.go:580]     Audit-Id: a8aed1d7-0eef-4626-9dc9-e26aba8bade3
	I0728 18:46:03.851264    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.851461    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gm24","generateName":"kube-proxy-","namespace":"kube-system","uid":"9db42267-b01f-40a3-bf21-c4d8cf6fb372","resourceVersion":"791","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:04.048596    4673 request.go:629] Waited for 196.805459ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:46:04.048724    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:46:04.048735    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.048746    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.048752    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.050870    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.050881    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.050887    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.050891    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.050896    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.050900    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.050904    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.050908    4673 round_trippers.go:580]     Audit-Id: ddb19336-96a6-40ce-8e69-2f220c6f258b
	I0728 18:46:04.051004    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m03","uid":"f2047331-d0da-470e-8da5-7b725a7d5c49","resourceVersion":"818","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_44_56_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3142 chars]
	I0728 18:46:04.051199    4673 pod_ready.go:92] pod "kube-proxy-7gm24" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:04.051211    4673 pod_ready.go:81] duration metric: took 400.625478ms for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.051219    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.248383    4673 request.go:629] Waited for 197.050186ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:04.248439    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:04.248447    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.248458    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.248467    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.251006    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.251018    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.251025    4673 round_trippers.go:580]     Audit-Id: d41e57d3-dc4f-4a37-ae68-f60ee45146ec
	I0728 18:46:04.251030    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.251036    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.251041    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.251045    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.251048    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.251220    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"488","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:04.447854    4673 request.go:629] Waited for 196.288477ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:04.447906    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:04.447916    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.447927    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.447932    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.450364    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.450377    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.450384    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.450390    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.450394    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.450398    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.450401    4673 round_trippers.go:580]     Audit-Id: 7a71e646-7769-4690-abb8-a1fc8004ec92
	I0728 18:46:04.450404    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.450731    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"552","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0728 18:46:04.450951    4673 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:04.450964    4673 pod_ready.go:81] duration metric: took 399.741092ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.450973    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.648036    4673 request.go:629] Waited for 196.965047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:04.648205    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:04.648219    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.648231    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.648240    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.650941    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.650955    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.650964    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.650968    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.650971    4673 round_trippers.go:580]     Audit-Id: 4c8bfa6a-8729-46af-88f9-50944792e7f9
	I0728 18:46:04.650975    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.650978    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.650982    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.651048    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"848","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0728 18:46:04.847040    4673 request.go:629] Waited for 195.669089ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:04.847073    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:04.847078    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.847118    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.847125    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.848826    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:04.848836    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.848844    4673 round_trippers.go:580]     Audit-Id: 8509a86f-61ec-49c5-bf04-5a95d1f2faeb
	I0728 18:46:04.848848    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.848851    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.848865    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.848872    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.848876    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.848962    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:04.849147    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-proxy-tz5h5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:04.849158    4673 pod_ready.go:81] duration metric: took 398.181075ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:04.849164    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-proxy-tz5h5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:04.849169    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:05.048177    4673 request.go:629] Waited for 198.951574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:05.048369    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:05.048380    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:05.048391    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:05.048398    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:05.051192    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:05.051214    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:05.051222    4673 round_trippers.go:580]     Audit-Id: bf0f2a0e-9e62-4bce-9dd6-d7e45a1792ae
	I0728 18:46:05.051225    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:05.051229    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:05.051234    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:05.051238    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:05.051241    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:05 GMT
	I0728 18:46:05.051520    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"834","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0728 18:46:05.248795    4673 request.go:629] Waited for 196.950351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:05.248895    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:05.248904    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:05.248915    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:05.248924    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:05.251844    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:05.251859    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:05.251866    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:05.251872    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:05.251876    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:05.251880    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:05 GMT
	I0728 18:46:05.251883    4673 round_trippers.go:580]     Audit-Id: fd53ac52-36c8-4a36-9c98-1e5e3bfbc51a
	I0728 18:46:05.251887    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:05.252200    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:05.252455    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-scheduler-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:05.252472    4673 pod_ready.go:81] duration metric: took 403.300338ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:05.252482    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-scheduler-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:05.252489    4673 pod_ready.go:38] duration metric: took 1.687030242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:05.252503    4673 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:46:05.263413    4673 command_runner.go:130] > -16
	I0728 18:46:05.263565    4673 ops.go:34] apiserver oom_adj: -16
	I0728 18:46:05.263572    4673 kubeadm.go:597] duration metric: took 8.547706097s to restartPrimaryControlPlane
	I0728 18:46:05.263578    4673 kubeadm.go:394] duration metric: took 8.568533174s to StartCluster
	I0728 18:46:05.263587    4673 settings.go:142] acquiring lock: {Name:mk9218fe520c81adf28e6207ae402102e10a5d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:05.263676    4673 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:46:05.264048    4673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/kubeconfig: {Name:mk76ac5b4283108fca1a66cc5cd0791fbea0691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:05.264314    4673 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:05.264327    4673 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:46:05.264447    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:05.308225    4673 out.go:177] * Verifying Kubernetes components...
	I0728 18:46:05.352178    4673 out.go:177] * Enabled addons: 
	I0728 18:46:05.373489    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:05.394137    4673 addons.go:510] duration metric: took 129.814599ms for enable addons: enabled=[]
	I0728 18:46:05.530364    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:46:05.542856    4673 node_ready.go:35] waiting up to 6m0s for node "multinode-362000" to be "Ready" ...
	I0728 18:46:05.542913    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:05.542919    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:05.542925    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:05.542928    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:05.544173    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:05.544182    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:05.544213    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:05.544218    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:05.544225    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:05 GMT
	I0728 18:46:05.544230    4673 round_trippers.go:580]     Audit-Id: 66ef8f37-b5be-468d-b667-dbe16d791ac7
	I0728 18:46:05.544235    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:05.544240    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:05.544353    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:06.045063    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:06.045088    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:06.045193    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:06.045205    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:06.047605    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:06.047617    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:06.047624    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:06 GMT
	I0728 18:46:06.047656    4673 round_trippers.go:580]     Audit-Id: 9818c57c-bafc-44d6-aa00-dbbe6b602d92
	I0728 18:46:06.047664    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:06.047669    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:06.047672    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:06.047676    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:06.047962    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:06.543394    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:06.543422    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:06.543434    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:06.543447    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:06.546471    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:06.546488    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:06.546496    4673 round_trippers.go:580]     Audit-Id: 52a69848-4d8a-4a54-9897-b751d38ecd7e
	I0728 18:46:06.546508    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:06.546513    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:06.546518    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:06.546522    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:06.546525    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:06 GMT
	I0728 18:46:06.546626    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:07.045034    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:07.045060    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:07.045071    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:07.045080    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:07.048063    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:07.048078    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:07.048086    4673 round_trippers.go:580]     Audit-Id: 00ba8ff8-1b8a-42a3-93d1-01013382ba46
	I0728 18:46:07.048091    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:07.048094    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:07.048098    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:07.048101    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:07.048104    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:07 GMT
	I0728 18:46:07.048177    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:07.542976    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:07.542992    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:07.543001    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:07.543034    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:07.545070    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:07.545081    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:07.545097    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:07.545107    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:07.545114    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:07 GMT
	I0728 18:46:07.545118    4673 round_trippers.go:580]     Audit-Id: a7499019-f86f-4dbe-bd14-355c8cb89d10
	I0728 18:46:07.545123    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:07.545125    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:07.545268    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:07.545471    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:08.045026    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:08.045053    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:08.045064    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:08.045072    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:08.047835    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:08.047851    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:08.047859    4673 round_trippers.go:580]     Audit-Id: fba0841d-ff46-4a3e-b939-14742d3a686e
	I0728 18:46:08.047863    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:08.047866    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:08.047869    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:08.047873    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:08.047877    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:08 GMT
	I0728 18:46:08.047960    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:08.544997    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:08.545024    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:08.545036    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:08.545041    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:08.547615    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:08.547630    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:08.547637    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:08.547641    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:08.547645    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:08.547649    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:08 GMT
	I0728 18:46:08.547652    4673 round_trippers.go:580]     Audit-Id: 164f8393-9fb5-4806-9c52-38422b7a7b30
	I0728 18:46:08.547657    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:08.547719    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:09.045022    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:09.045062    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:09.045074    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:09.045080    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:09.047760    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:09.047776    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:09.047783    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:09 GMT
	I0728 18:46:09.047787    4673 round_trippers.go:580]     Audit-Id: ebc0b325-15ad-4407-8d3e-743ff9541e92
	I0728 18:46:09.047791    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:09.047796    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:09.047799    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:09.047803    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:09.047997    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:09.544321    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:09.544349    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:09.544395    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:09.544404    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:09.547091    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:09.547106    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:09.547113    4673 round_trippers.go:580]     Audit-Id: d43958a7-5d3d-48bf-ba82-ef8580d5b782
	I0728 18:46:09.547117    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:09.547121    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:09.547124    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:09.547128    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:09.547131    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:09 GMT
	I0728 18:46:09.547395    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:09.547642    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:10.044999    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:10.045013    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:10.045018    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:10.045021    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:10.046893    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:10.046917    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:10.046934    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:10.046942    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:10.046953    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:10 GMT
	I0728 18:46:10.046976    4673 round_trippers.go:580]     Audit-Id: feeb560d-fd24-4434-b96e-9fe8fa976c83
	I0728 18:46:10.046983    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:10.046987    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:10.047155    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:10.543294    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:10.543320    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:10.543332    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:10.543338    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:10.546065    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:10.546079    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:10.546086    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:10.546090    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:10.546093    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:10.546095    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:10 GMT
	I0728 18:46:10.546099    4673 round_trippers.go:580]     Audit-Id: db3dc4e2-d90d-4816-aba5-bce00fdddf97
	I0728 18:46:10.546102    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:10.546179    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:11.045037    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:11.045064    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:11.045076    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:11.045081    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:11.047659    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:11.047676    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:11.047686    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:11.047693    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:11.047699    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:11.047706    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:11.047711    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:11 GMT
	I0728 18:46:11.047718    4673 round_trippers.go:580]     Audit-Id: c9186c44-34f5-4dd1-b086-2c827930ebc5
	I0728 18:46:11.047877    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:11.543581    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:11.543606    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:11.543618    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:11.543631    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:11.546284    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:11.546298    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:11.546305    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:11.546310    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:11 GMT
	I0728 18:46:11.546313    4673 round_trippers.go:580]     Audit-Id: 406c9862-f9db-46fc-a80a-e05dc2cf11a8
	I0728 18:46:11.546317    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:11.546321    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:11.546324    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:11.546458    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:12.045001    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:12.045030    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:12.045041    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:12.045048    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:12.048347    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:12.048363    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:12.048370    4673 round_trippers.go:580]     Audit-Id: 4bb1c035-73c4-4e29-bc62-41b55a590965
	I0728 18:46:12.048374    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:12.048390    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:12.048395    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:12.048400    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:12.048405    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:12 GMT
	I0728 18:46:12.048488    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:12.048729    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:12.542964    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:12.542981    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:12.543046    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:12.543052    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:12.545156    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:12.545166    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:12.545171    4673 round_trippers.go:580]     Audit-Id: ff5f0769-b432-4e44-ac9e-4fa1719357f5
	I0728 18:46:12.545175    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:12.545178    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:12.545182    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:12.545185    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:12.545188    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:12 GMT
	I0728 18:46:12.545411    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:13.044062    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:13.044089    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:13.044182    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:13.044190    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:13.046868    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:13.046883    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:13.046894    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:13.046903    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:13.046914    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:13 GMT
	I0728 18:46:13.046920    4673 round_trippers.go:580]     Audit-Id: c683fcdd-13f9-4ea4-9cee-0f3ac197efb2
	I0728 18:46:13.046923    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:13.046926    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:13.047256    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:13.542932    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:13.542998    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:13.543006    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:13.543010    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:13.544665    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:13.544688    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:13.544700    4673 round_trippers.go:580]     Audit-Id: c94d2f3f-ca72-4339-9b49-02f96670c69c
	I0728 18:46:13.544721    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:13.544728    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:13.544732    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:13.544778    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:13.544785    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:13 GMT
	I0728 18:46:13.544832    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:14.043459    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:14.043485    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:14.043577    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:14.043587    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:14.045897    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:14.045910    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:14.045917    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:14 GMT
	I0728 18:46:14.045922    4673 round_trippers.go:580]     Audit-Id: cc6159a8-d060-4bc8-9987-6613ff0cb383
	I0728 18:46:14.045926    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:14.045930    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:14.045934    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:14.045953    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:14.046215    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:14.544073    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:14.544102    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:14.544114    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:14.544201    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:14.547148    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:14.547167    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:14.547178    4673 round_trippers.go:580]     Audit-Id: 7b5e91a8-420b-4520-9a79-0e253be262cb
	I0728 18:46:14.547185    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:14.547201    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:14.547208    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:14.547213    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:14.547217    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:14 GMT
	I0728 18:46:14.547504    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:14.547766    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:15.044502    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:15.044530    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:15.044543    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:15.044551    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:15.047500    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:15.047514    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:15.047521    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:15 GMT
	I0728 18:46:15.047527    4673 round_trippers.go:580]     Audit-Id: f7fe108e-d3e8-4a4b-9795-95c1dcb8cdd2
	I0728 18:46:15.047532    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:15.047539    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:15.047545    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:15.047551    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:15.047655    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:15.543085    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:15.543105    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:15.543113    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:15.543122    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:15.544978    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:15.544987    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:15.544991    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:15.544995    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:15.544998    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:15.545000    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:15 GMT
	I0728 18:46:15.545003    4673 round_trippers.go:580]     Audit-Id: bcbbb740-f897-4438-bb14-d4489110f159
	I0728 18:46:15.545007    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:15.545130    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:16.045034    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:16.045071    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:16.045116    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:16.045125    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:16.047934    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:16.047952    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:16.047963    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:16 GMT
	I0728 18:46:16.047971    4673 round_trippers.go:580]     Audit-Id: 42350baf-ccaf-4bed-a159-5420db3fe12b
	I0728 18:46:16.047978    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:16.047982    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:16.047986    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:16.047989    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:16.048124    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:16.543832    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:16.543853    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:16.543861    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:16.543864    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:16.545777    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:16.545792    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:16.545801    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:16.545807    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:16.545811    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:16.545815    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:16.545818    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:16 GMT
	I0728 18:46:16.545822    4673 round_trippers.go:580]     Audit-Id: adc6d060-417c-4d7c-b414-131fcc6c1c96
	I0728 18:46:16.546040    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:17.043321    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:17.043347    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:17.043441    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:17.043451    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:17.046058    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:17.046073    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:17.046081    4673 round_trippers.go:580]     Audit-Id: e41093a7-258f-49d1-93f6-7f6fe0f09aa3
	I0728 18:46:17.046085    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:17.046088    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:17.046092    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:17.046096    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:17.046099    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:17 GMT
	I0728 18:46:17.046247    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:17.046497    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:17.543916    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:17.543943    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:17.543957    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:17.543965    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:17.546752    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:17.546767    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:17.546774    4673 round_trippers.go:580]     Audit-Id: e7f82c3f-840a-49fa-aa8b-00d2a86a7d20
	I0728 18:46:17.546780    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:17.546784    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:17.546787    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:17.546790    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:17.546794    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:17 GMT
	I0728 18:46:17.547105    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:18.043703    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:18.043723    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:18.043731    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:18.043791    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:18.045772    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:18.045796    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:18.045810    4673 round_trippers.go:580]     Audit-Id: ecc25900-df6d-4446-b249-c76fa67dcd39
	I0728 18:46:18.045816    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:18.045825    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:18.045831    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:18.045835    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:18.045840    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:18 GMT
	I0728 18:46:18.045937    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:18.544010    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:18.544040    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:18.544052    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:18.544059    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:18.546600    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:18.546616    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:18.546625    4673 round_trippers.go:580]     Audit-Id: 0b58c9b9-9688-49a3-ad30-7a5c2d538759
	I0728 18:46:18.546630    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:18.546636    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:18.546641    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:18.546645    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:18.546650    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:18 GMT
	I0728 18:46:18.546707    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:19.042938    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:19.042965    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:19.042976    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:19.042982    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:19.045332    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:19.045340    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:19.045345    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:19 GMT
	I0728 18:46:19.045348    4673 round_trippers.go:580]     Audit-Id: c598eca3-7a23-438d-9341-9d98f07cedfe
	I0728 18:46:19.045350    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:19.045352    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:19.045361    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:19.045366    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:19.045630    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:19.544343    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:19.544371    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:19.544461    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:19.544473    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:19.546997    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:19.547010    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:19.547017    4673 round_trippers.go:580]     Audit-Id: 9166962d-daa5-425b-a6d4-09359cea1a45
	I0728 18:46:19.547021    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:19.547026    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:19.547029    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:19.547034    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:19.547038    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:19 GMT
	I0728 18:46:19.547331    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:19.547581    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:20.044349    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:20.044373    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:20.044384    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:20.044390    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:20.046974    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:20.046987    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:20.046994    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:20.047001    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:20 GMT
	I0728 18:46:20.047007    4673 round_trippers.go:580]     Audit-Id: 2ff15e4a-528c-4f84-80b8-e1b7a73a838f
	I0728 18:46:20.047012    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:20.047018    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:20.047023    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:20.047478    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:20.542912    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:20.542939    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:20.542948    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:20.542953    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:20.545341    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:20.545353    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:20.545359    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:20 GMT
	I0728 18:46:20.545364    4673 round_trippers.go:580]     Audit-Id: 64b92245-bfe4-4339-bf04-c3a08894fd2e
	I0728 18:46:20.545369    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:20.545374    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:20.545378    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:20.545382    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:20.545554    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:21.043514    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:21.043545    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:21.043606    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:21.043618    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:21.046644    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:21.046658    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:21.046665    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:21.046670    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:21 GMT
	I0728 18:46:21.046673    4673 round_trippers.go:580]     Audit-Id: 156a8527-9325-47dd-be01-940ee9577457
	I0728 18:46:21.046676    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:21.046681    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:21.046683    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:21.046773    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:21.544893    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:21.544917    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:21.544925    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:21.544932    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:21.547195    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:21.547207    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:21.547212    4673 round_trippers.go:580]     Audit-Id: 1a1d8322-6fe7-420f-96f6-20f97811bff9
	I0728 18:46:21.547215    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:21.547218    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:21.547220    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:21.547223    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:21.547226    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:21 GMT
	I0728 18:46:21.547272    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:22.043695    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:22.043722    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.043734    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.043739    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.046854    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:22.046873    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.046883    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.046888    4673 round_trippers.go:580]     Audit-Id: f6098c3a-d83f-49f6-95a2-2d2ab872a960
	I0728 18:46:22.046893    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.046898    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.046903    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.046909    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.047092    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:22.047354    4673 node_ready.go:49] node "multinode-362000" has status "Ready":"True"
	I0728 18:46:22.047370    4673 node_ready.go:38] duration metric: took 16.504612091s for node "multinode-362000" to be "Ready" ...
	I0728 18:46:22.047378    4673 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:22.047429    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:22.047437    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.047445    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.047450    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.050643    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:22.050651    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.050656    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.050659    4673 round_trippers.go:580]     Audit-Id: fa55cad0-b4a0-4db3-b378-422236819354
	I0728 18:46:22.050662    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.050664    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.050667    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.050670    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.051145    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"979"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86038 chars]
	I0728 18:46:22.052933    4673 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:22.052975    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:22.052979    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.052985    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.052988    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.054355    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:22.054363    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.054370    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.054377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.054384    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.054391    4673 round_trippers.go:580]     Audit-Id: bd91cc36-2b77-40a4-8b32-409126ce244b
	I0728 18:46:22.054395    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.054398    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.054542    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:22.054767    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:22.054774    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.054779    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.054783    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.055806    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:22.055813    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.055818    4673 round_trippers.go:580]     Audit-Id: 3e3e627c-ffec-47fb-a34b-cb5ff0d6669c
	I0728 18:46:22.055823    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.055826    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.055829    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.055831    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.055833    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.056066    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:22.554071    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:22.554097    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.554145    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.554156    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.556458    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:22.556469    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.556476    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.556481    4673 round_trippers.go:580]     Audit-Id: b030f475-f46c-43b2-8772-36bcbd61b75f
	I0728 18:46:22.556485    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.556489    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.556495    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.556502    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.556735    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:22.557097    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:22.557107    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.557115    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.557120    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.558345    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:22.558352    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.558357    4673 round_trippers.go:580]     Audit-Id: 97147439-01b2-480c-b13c-f913be98c3b8
	I0728 18:46:22.558360    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.558386    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.558394    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.558398    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.558401    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.558548    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:23.053402    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:23.053422    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.053430    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.053434    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.055689    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:23.055698    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.055709    4673 round_trippers.go:580]     Audit-Id: 491a4ccd-431a-4ad9-9d73-d3a7074f9904
	I0728 18:46:23.055712    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.055714    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.055717    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.055719    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.055722    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.055924    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:23.056254    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:23.056261    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.056270    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.056275    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.057415    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:23.057425    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.057432    4673 round_trippers.go:580]     Audit-Id: 8f0e602b-4ea6-42ef-a5b9-b6c7d880f2c7
	I0728 18:46:23.057436    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.057440    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.057446    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.057449    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.057451    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.057592    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:23.554711    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:23.554733    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.554745    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.554759    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.557957    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:23.557970    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.557977    4673 round_trippers.go:580]     Audit-Id: 717f925b-80d5-4626-84b9-606a908e4e27
	I0728 18:46:23.557985    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.557990    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.557996    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.558005    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.558008    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.558567    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:23.558839    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:23.558846    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.558852    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.558856    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.560057    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:23.560064    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.560069    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.560074    4673 round_trippers.go:580]     Audit-Id: db27b253-9c23-4534-9325-e325e18fc3d5
	I0728 18:46:23.560076    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.560078    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.560081    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.560085    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.560237    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:24.054520    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:24.054542    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.054552    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.054557    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.057200    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:24.057213    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.057224    4673 round_trippers.go:580]     Audit-Id: 17c96eab-dffc-4441-b9ee-1dd665695d72
	I0728 18:46:24.057233    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.057241    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.057247    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.057252    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.057258    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.057613    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:24.057994    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:24.058004    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.058011    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.058017    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.059280    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:24.059291    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.059298    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.059304    4673 round_trippers.go:580]     Audit-Id: 5663cc51-fa18-4198-b64e-c612f733851c
	I0728 18:46:24.059310    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.059316    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.059320    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.059324    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.059465    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:24.059632    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:24.553738    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:24.553763    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.553775    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.553781    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.556893    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:24.556909    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.556917    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.556921    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.556925    4673 round_trippers.go:580]     Audit-Id: 9bab4c2d-b47a-4697-b09d-5c325f3feecc
	I0728 18:46:24.556928    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.556931    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.556935    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.557111    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:24.557472    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:24.557482    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.557490    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.557495    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.559105    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:24.559115    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.559122    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.559140    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.559151    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.559154    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.559158    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.559162    4673 round_trippers.go:580]     Audit-Id: 0cf3c394-70e7-4dff-aeeb-deb2dfb8026a
	I0728 18:46:24.559247    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:25.053655    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:25.053679    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.053691    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.053700    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.056720    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:25.056734    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.056742    4673 round_trippers.go:580]     Audit-Id: ad6aff22-1872-40c7-ab07-f98be348c2a5
	I0728 18:46:25.056747    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.056752    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.056755    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.056778    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.056787    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.056887    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:25.057257    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:25.057267    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.057275    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.057279    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.058603    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:25.058612    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.058617    4673 round_trippers.go:580]     Audit-Id: 1e5b5f2e-2153-4e8a-9193-57881f898e21
	I0728 18:46:25.058619    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.058621    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.058624    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.058627    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.058629    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.058699    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:25.555099    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:25.555207    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.555221    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.555230    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.557854    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:25.557868    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.557876    4673 round_trippers.go:580]     Audit-Id: 9384fbe9-b470-4854-8550-7024491f3972
	I0728 18:46:25.557880    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.557887    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.557892    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.557914    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.557922    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.558059    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:25.558438    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:25.558447    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.558456    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.558460    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.560004    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:25.560014    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.560019    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.560022    4673 round_trippers.go:580]     Audit-Id: fb41d6a2-1911-416d-a270-4d454e97ad25
	I0728 18:46:25.560025    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.560028    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.560031    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.560034    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.560103    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:26.053984    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:26.054008    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.054019    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.054025    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.056944    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:26.056960    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.056967    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.056972    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.056997    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.057012    4673 round_trippers.go:580]     Audit-Id: 7588d8da-bacc-4c81-bfe2-ebe25cc09f3d
	I0728 18:46:26.057019    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.057024    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.057351    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:26.057725    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:26.057736    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.057744    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.057748    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.059216    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:26.059227    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.059232    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.059236    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.059239    4673 round_trippers.go:580]     Audit-Id: 8f9da684-b57b-439e-ac10-be76459af05b
	I0728 18:46:26.059242    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.059246    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.059249    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.059305    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:26.554231    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:26.554247    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.554253    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.554257    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.555906    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:26.555917    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.555922    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.555925    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.555928    4673 round_trippers.go:580]     Audit-Id: 15c156d6-277e-4b52-ad65-7a9340a270c1
	I0728 18:46:26.555930    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.555940    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.555944    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.556055    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:26.556328    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:26.556335    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.556341    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.556344    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.557440    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:26.557448    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.557453    4673 round_trippers.go:580]     Audit-Id: db74b62a-5a02-4079-bf52-19c4202782da
	I0728 18:46:26.557456    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.557459    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.557461    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.557463    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.557466    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.557523    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:26.557688    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:27.053477    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:27.053503    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.053515    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.053524    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.056453    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:27.056470    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.056478    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.056482    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.056496    4673 round_trippers.go:580]     Audit-Id: bd8354b4-9607-4a9f-b2bd-21c1e0cb9963
	I0728 18:46:27.056502    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.056507    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.056510    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.056582    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:27.056941    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:27.056950    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.056958    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.056962    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.058137    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:27.058143    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.058148    4673 round_trippers.go:580]     Audit-Id: ef1b79bd-a3ec-4c52-8da5-a52b3e48e6c4
	I0728 18:46:27.058151    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.058155    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.058157    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.058160    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.058163    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.058232    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:27.554641    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:27.554665    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.554677    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.554685    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.557542    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:27.557577    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.557627    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.557639    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.557644    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.557649    4673 round_trippers.go:580]     Audit-Id: 98fa8e01-106c-4f9a-8071-9062e7046442
	I0728 18:46:27.557653    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.557657    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.557749    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:27.558101    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:27.558117    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.558125    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.558128    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.559585    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:27.559595    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.559600    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.559604    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.559618    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.559626    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.559629    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.559632    4673 round_trippers.go:580]     Audit-Id: 30595a70-2a97-4d70-9a57-53950c643d7c
	I0728 18:46:27.559729    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:28.053340    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:28.053364    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.053375    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.053380    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.056222    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:28.056251    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.056286    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.056301    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.056317    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.056321    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.056324    4673 round_trippers.go:580]     Audit-Id: 3673f36a-ebda-425c-866a-bf425359b217
	I0728 18:46:28.056329    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.056421    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:28.056783    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:28.056793    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.056801    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.056807    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.058118    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:28.058127    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.058132    4673 round_trippers.go:580]     Audit-Id: c3041389-cdd8-440d-9b50-8f38a62bedfe
	I0728 18:46:28.058148    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.058154    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.058160    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.058165    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.058169    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.058236    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:28.554811    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:28.554836    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.554891    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.554900    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.557398    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:28.557410    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.557418    4673 round_trippers.go:580]     Audit-Id: 946aa63a-5b25-418d-bd8f-cdd3170e04c1
	I0728 18:46:28.557423    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.557428    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.557433    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.557440    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.557448    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.557737    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:28.558090    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:28.558100    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.558110    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.558114    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.559676    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:28.559683    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.559688    4673 round_trippers.go:580]     Audit-Id: 46fbf5c9-1fdc-44e1-9cfa-b6c0a3ffac88
	I0728 18:46:28.559691    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.559694    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.559698    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.559701    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.559703    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.559987    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:28.560161    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:29.053744    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:29.053767    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.053778    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.053785    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.056546    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:29.056558    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.056564    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.056569    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.056574    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.056580    4673 round_trippers.go:580]     Audit-Id: 0d70e27a-e492-438a-b910-b79155503968
	I0728 18:46:29.056587    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.056591    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.056671    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:29.057029    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:29.057038    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.057047    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.057050    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.058440    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:29.058449    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.058454    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.058457    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.058461    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.058463    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.058466    4673 round_trippers.go:580]     Audit-Id: 82784609-d28e-44b7-8b59-36ee33cd266d
	I0728 18:46:29.058470    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.058524    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:29.553102    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:29.553122    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.553131    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.553137    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.555204    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:29.555214    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.555222    4673 round_trippers.go:580]     Audit-Id: e00ff64f-7619-4c62-a778-75045c1c7929
	I0728 18:46:29.555228    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.555235    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.555239    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.555244    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.555249    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.555444    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:29.555835    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:29.555842    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.555867    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.555871    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.556965    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:29.556973    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.556977    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.556980    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.556983    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.556985    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.556987    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.556990    4673 round_trippers.go:580]     Audit-Id: 1f84a3a8-16b2-46ae-a170-de3a8478c9ce
	I0728 18:46:29.557155    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:30.054958    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:30.054987    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.055028    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.055056    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.057543    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:30.057558    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.057566    4673 round_trippers.go:580]     Audit-Id: 895d7393-eae6-446d-a2db-ca87945e4250
	I0728 18:46:30.057570    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.057575    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.057585    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.057588    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.057593    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.057759    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:30.058124    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:30.058135    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.058142    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.058155    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.059555    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:30.059566    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.059572    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.059576    4673 round_trippers.go:580]     Audit-Id: 9a197be8-58f8-4102-982b-89136f4cd198
	I0728 18:46:30.059580    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.059583    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.059586    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.059589    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.059964    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:30.554278    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:30.554292    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.554297    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.554301    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.556057    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:30.556067    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.556072    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.556075    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.556077    4673 round_trippers.go:580]     Audit-Id: 48db864f-ca2b-426c-ae18-a8e0a382a5a0
	I0728 18:46:30.556080    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.556083    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.556087    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.556323    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:30.556631    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:30.556638    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.556644    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.556647    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.557723    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:30.557732    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.557739    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.557744    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.557748    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.557753    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.557758    4673 round_trippers.go:580]     Audit-Id: 3ab55ac0-1676-4c2c-961a-c138c0a1662f
	I0728 18:46:30.557763    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.557871    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:31.053439    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:31.053462    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.053471    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.053477    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.056264    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:31.056276    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.056285    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.056290    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.056295    4673 round_trippers.go:580]     Audit-Id: 2f1121a9-5bdd-4328-bc9e-25bdc3609014
	I0728 18:46:31.056307    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.056312    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.056316    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.056549    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:31.056933    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:31.056943    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.056950    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.056955    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.058234    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:31.058243    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.058248    4673 round_trippers.go:580]     Audit-Id: cb1346b3-8032-445b-b881-e62088da4b16
	I0728 18:46:31.058265    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.058270    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.058273    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.058276    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.058279    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.058381    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:31.058559    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:31.553220    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:31.553243    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.553256    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.553262    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.557537    4673 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:46:31.557550    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.557555    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.557558    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.557561    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.557563    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.557566    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.557568    4673 round_trippers.go:580]     Audit-Id: 87141df1-a26a-4671-8a8c-11268925a051
	I0728 18:46:31.558343    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:31.558644    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:31.558651    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.558661    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.558664    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.560738    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:31.560748    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.560753    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.560757    4673 round_trippers.go:580]     Audit-Id: 27273db0-15ee-4dc6-8fb5-ddd50941d5bb
	I0728 18:46:31.560760    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.560764    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.560767    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.560769    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.560873    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:32.053170    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:32.053191    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.053203    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.053208    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.055427    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:32.055440    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.055447    4673 round_trippers.go:580]     Audit-Id: 0c8a0008-7129-47bf-b950-18080b06b05b
	I0728 18:46:32.055453    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.055459    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.055466    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.055471    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.055475    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.055747    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:32.056111    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:32.056121    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.056129    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.056134    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.057374    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:32.057385    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.057392    4673 round_trippers.go:580]     Audit-Id: f23c9011-be36-4abf-8309-5bdba4eab32a
	I0728 18:46:32.057397    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.057400    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.057403    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.057407    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.057411    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.057563    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:32.553289    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:32.553311    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.553322    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.553328    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.555771    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:32.555784    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.555791    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.555794    4673 round_trippers.go:580]     Audit-Id: 0f167ca1-4904-46dd-8c2b-76e9cd0083a6
	I0728 18:46:32.555797    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.555800    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.555804    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.555812    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.556047    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:32.556433    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:32.556443    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.556451    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.556456    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.557912    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:32.557921    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.557926    4673 round_trippers.go:580]     Audit-Id: aacde14a-6e0c-4e38-b773-e34483fddd92
	I0728 18:46:32.557930    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.557934    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.557937    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.557940    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.557943    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.558019    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:33.054431    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:33.054454    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.054466    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.054475    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.057190    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:33.057201    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.057208    4673 round_trippers.go:580]     Audit-Id: aa7ee3cb-ddab-4d90-ac64-82b9f4a8b7ca
	I0728 18:46:33.057214    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.057219    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.057223    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.057226    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.057229    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.057644    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:33.058003    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:33.058012    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.058017    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.058023    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.059014    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:33.059023    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.059028    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.059041    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.059045    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.059048    4673 round_trippers.go:580]     Audit-Id: 382aab00-53b9-4fb6-8552-fe5d40a50ae6
	I0728 18:46:33.059051    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.059054    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.059166    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:33.059349    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:33.553011    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:33.553036    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.553118    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.553129    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.555509    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:33.555519    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.555549    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.555582    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.555606    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.555616    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.555621    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.555625    4673 round_trippers.go:580]     Audit-Id: 4be7f138-2fcb-4bee-9667-7c2ce37a2796
	I0728 18:46:33.556023    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:33.556380    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:33.556387    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.556393    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.556396    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.557489    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:33.557497    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.557501    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.557505    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.557508    4673 round_trippers.go:580]     Audit-Id: 0a91cc0e-e80f-4f58-b06c-4643f2e9cba1
	I0728 18:46:33.557512    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.557516    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.557520    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.557674    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:34.053940    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:34.053955    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.053961    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.053966    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.056143    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:34.056161    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.056170    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.056174    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.056178    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.056182    4673 round_trippers.go:580]     Audit-Id: 72e823cd-721c-4cbc-973c-e397e6ff85b8
	I0728 18:46:34.056192    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.056195    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.056325    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:34.056615    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:34.056622    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.056627    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.056629    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.060741    4673 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:46:34.060753    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.060759    4673 round_trippers.go:580]     Audit-Id: 9dc3456c-f263-47de-8e08-57a1000e34df
	I0728 18:46:34.060762    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.060765    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.060767    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.060776    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.060779    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.060852    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:34.554369    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:34.554392    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.554402    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.554409    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.557083    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:34.557101    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.557111    4673 round_trippers.go:580]     Audit-Id: da982c2d-805c-4f22-97d4-f9af6ab9ff8a
	I0728 18:46:34.557116    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.557119    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.557123    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.557125    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.557130    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.557219    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:34.557599    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:34.557608    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.557616    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.557623    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.559012    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:34.559026    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.559037    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.559054    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.559065    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.559101    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.559110    4673 round_trippers.go:580]     Audit-Id: 117c58c2-c316-4dc5-8944-1a741a9e6f82
	I0728 18:46:34.559115    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.559231    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.054383    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:35.054406    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.054419    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.054424    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.057108    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.057128    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.057136    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.057140    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.057144    4673 round_trippers.go:580]     Audit-Id: dacaa515-204d-487c-8c08-421f1408e92f
	I0728 18:46:35.057160    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.057166    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.057169    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.057254    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0728 18:46:35.057629    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.057638    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.057646    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.057650    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.059123    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:35.059131    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.059136    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.059139    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.059142    4673 round_trippers.go:580]     Audit-Id: da634a22-0db1-48b7-9407-3e90ce62a5ec
	I0728 18:46:35.059144    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.059147    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.059149    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.059235    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.059457    4673 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.059478    4673 pod_ready.go:81] duration metric: took 13.006630167s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.059504    4673 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.059531    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:46:35.059536    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.059541    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.059558    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.060683    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:35.060689    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.060693    4673 round_trippers.go:580]     Audit-Id: 54dc2837-8cdd-449b-acda-f2d4dfa6063a
	I0728 18:46:35.060697    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.060700    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.060716    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.060721    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.060725    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.060858    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"971","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0728 18:46:35.061068    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.061080    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.061086    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.061090    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.062095    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.062104    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.062112    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.062142    4673 round_trippers.go:580]     Audit-Id: 378f0039-e672-4a84-a68e-01c8a3cf8201
	I0728 18:46:35.062149    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.062155    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.062159    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.062162    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.062285    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.062449    4673 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.062457    4673 pod_ready.go:81] duration metric: took 2.948208ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.062466    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.062501    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:46:35.062506    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.062511    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.062515    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.063872    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:35.063880    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.063885    4673 round_trippers.go:580]     Audit-Id: 988d663d-2973-4c67-a678-e674a3485aa4
	I0728 18:46:35.063889    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.063892    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.063896    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.063898    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.063900    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.064101    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"961","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0728 18:46:35.064330    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.064337    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.064342    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.064345    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.065310    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.065318    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.065322    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.065325    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.065336    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.065339    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.065356    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.065362    4673 round_trippers.go:580]     Audit-Id: aceaf56d-797d-495a-9f24-2c2e1eb93604
	I0728 18:46:35.065495    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.065659    4673 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.065667    4673 pod_ready.go:81] duration metric: took 3.195535ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.065673    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.065702    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:46:35.065707    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.065712    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.065716    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.066537    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.066544    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.066550    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.066554    4673 round_trippers.go:580]     Audit-Id: acce14da-783e-48b8-847d-5f0b73f047c8
	I0728 18:46:35.066572    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.066578    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.066581    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.066584    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.066704    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"969","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0728 18:46:35.066934    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.066940    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.066946    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.066950    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.067796    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.067802    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.067805    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.067808    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.067811    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.067815    4673 round_trippers.go:580]     Audit-Id: 79dcf1d9-dbb7-4576-9e35-15e921ae005c
	I0728 18:46:35.067818    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.067820    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.067978    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.068161    4673 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.068168    4673 pod_ready.go:81] duration metric: took 2.490787ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.068175    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.068203    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:46:35.068208    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.068213    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.068217    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.069147    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.069155    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.069160    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.069164    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.069168    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.069171    4673 round_trippers.go:580]     Audit-Id: 7fef5e1c-ccad-48d3-bef1-dae798419617
	I0728 18:46:35.069174    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.069177    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.069347    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gm24","generateName":"kube-proxy-","namespace":"kube-system","uid":"9db42267-b01f-40a3-bf21-c4d8cf6fb372","resourceVersion":"791","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:35.069575    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:46:35.069582    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.069587    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.069591    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.070457    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.070465    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.070470    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.070474    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.070485    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.070489    4673 round_trippers.go:580]     Audit-Id: 6a83c6ff-a36b-438b-aefc-653486499cfe
	I0728 18:46:35.070491    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.070494    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.070808    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m03","uid":"f2047331-d0da-470e-8da5-7b725a7d5c49","resourceVersion":"818","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_44_56_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3142 chars]
	I0728 18:46:35.070938    4673 pod_ready.go:92] pod "kube-proxy-7gm24" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.070945    4673 pod_ready.go:81] duration metric: took 2.764802ms for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.070950    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.255986    4673 request.go:629] Waited for 185.000378ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:35.256123    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:35.256133    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.256143    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.256148    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.258705    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.258720    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.258727    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.258731    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.258755    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.258762    4673 round_trippers.go:580]     Audit-Id: 7a8a77c4-9da9-4d4b-b976-46a852f0b4b4
	I0728 18:46:35.258768    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.258771    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.259140    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"488","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:35.454893    4673 request.go:629] Waited for 195.324739ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:35.454962    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:35.454972    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.454983    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.454991    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.457268    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.457281    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.457288    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.457292    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.457302    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.457307    4673 round_trippers.go:580]     Audit-Id: 8100b577-1beb-4ec4-98d5-6b4144066370
	I0728 18:46:35.457311    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.457314    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.457426    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"552","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0728 18:46:35.457643    4673 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.457654    4673 pod_ready.go:81] duration metric: took 386.700912ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.457663    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.654375    4673 request.go:629] Waited for 196.660537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:35.654484    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:35.654501    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.654513    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.654521    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.656988    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.657002    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.657009    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.657013    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.657017    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.657020    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.657024    4673 round_trippers.go:580]     Audit-Id: 935ea276-b47a-4af4-801e-20cc74a065b8
	I0728 18:46:35.657029    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.657215    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"974","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0728 18:46:35.854776    4673 request.go:629] Waited for 197.226685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.854827    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.854835    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.854844    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.854850    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.857440    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.857453    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.857460    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.857469    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.857475    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.857479    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:35.857484    4673 round_trippers.go:580]     Audit-Id: ab6ca91d-53c7-4f2f-86d8-40bae10da2d6
	I0728 18:46:35.857497    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.857891    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.858153    4673 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.858165    4673 pod_ready.go:81] duration metric: took 400.49922ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.858174    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:36.054446    4673 request.go:629] Waited for 196.226607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:36.054567    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:36.054580    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.054591    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.054598    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.057082    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:36.057096    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.057104    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.057108    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.057112    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.057116    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.057119    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.057123    4673 round_trippers.go:580]     Audit-Id: f9db7be3-b20c-42bc-a3b7-a9c9b502c232
	I0728 18:46:36.057234    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"970","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0728 18:46:36.255540    4673 request.go:629] Waited for 197.989919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:36.255579    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:36.255589    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.255598    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.255604    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.257516    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:36.257528    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.257534    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.257538    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.257541    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.257545    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.257548    4673 round_trippers.go:580]     Audit-Id: 25cd92d4-31ad-4d49-90d7-18d54faddb30
	I0728 18:46:36.257552    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.257853    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:36.258126    4673 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:36.258135    4673 pod_ready.go:81] duration metric: took 399.957319ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:36.258141    4673 pod_ready.go:38] duration metric: took 14.210858858s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:36.258155    4673 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:46:36.258205    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:46:36.272439    4673 command_runner.go:130] > 1742
	I0728 18:46:36.272689    4673 api_server.go:72] duration metric: took 31.008584578s to wait for apiserver process to appear ...
	I0728 18:46:36.272698    4673 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:46:36.272707    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:36.276033    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:46:36.276063    4673 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0728 18:46:36.276067    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.276085    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.276093    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.276656    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:36.276664    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.276669    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.276678    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.276682    4673 round_trippers.go:580]     Content-Length: 263
	I0728 18:46:36.276685    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.276694    4673 round_trippers.go:580]     Audit-Id: 5a8e0660-3971-49a5-be49-8d5b3568bdfb
	I0728 18:46:36.276702    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.276704    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.276718    4673 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 18:46:36.276739    4673 api_server.go:141] control plane version: v1.30.3
	I0728 18:46:36.276748    4673 api_server.go:131] duration metric: took 4.045315ms to wait for apiserver health ...
	I0728 18:46:36.276752    4673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 18:46:36.454343    4673 request.go:629] Waited for 177.560441ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.454475    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.454480    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.454531    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.454534    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.457584    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:36.457596    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.457604    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.457609    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.457616    4673 round_trippers.go:580]     Audit-Id: 00b0fcfe-ae65-4915-980b-2ee6e8c13970
	I0728 18:46:36.457620    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.457622    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.457633    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.458950    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86400 chars]
	I0728 18:46:36.461082    4673 system_pods.go:59] 12 kube-system pods found
	I0728 18:46:36.461117    4673 system_pods.go:61] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:46:36.461120    4673 system_pods.go:61] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:46:36.461123    4673 system_pods.go:61] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:46:36.461128    4673 system_pods.go:61] "kindnet-5dhhf" [e124802a-dbb6-4100-8c49-8a75ea05217a] Running
	I0728 18:46:36.461133    4673 system_pods.go:61] "kindnet-8hhwv" [487e32b7-7175-4187-89ba-90bb4d597681] Running
	I0728 18:46:36.461136    4673 system_pods.go:61] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:46:36.461143    4673 system_pods.go:61] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:46:36.461147    4673 system_pods.go:61] "kube-proxy-7gm24" [9db42267-b01f-40a3-bf21-c4d8cf6fb372] Running
	I0728 18:46:36.461149    4673 system_pods.go:61] "kube-proxy-dzz6p" [577d6ba2-e17a-426f-8315-1688766fa435] Running
	I0728 18:46:36.461152    4673 system_pods.go:61] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:46:36.461154    4673 system_pods.go:61] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:46:36.461158    4673 system_pods.go:61] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 18:46:36.461163    4673 system_pods.go:74] duration metric: took 184.408643ms to wait for pod list to return data ...
	I0728 18:46:36.461195    4673 default_sa.go:34] waiting for default service account to be created ...
	I0728 18:46:36.655695    4673 request.go:629] Waited for 194.407341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:46:36.655774    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:46:36.655782    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.655792    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.655799    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.658351    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:36.658365    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.658372    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.658377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.658380    4673 round_trippers.go:580]     Content-Length: 262
	I0728 18:46:36.658392    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.658395    4673 round_trippers.go:580]     Audit-Id: e30dfd57-31ba-4f5b-b764-5ca09573e21c
	I0728 18:46:36.658400    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.658404    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.658417    4673 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"379c0dca-8465-4249-afbe-a226c72714a6","resourceVersion":"334","creationTimestamp":"2024-07-29T01:40:10Z"}}]}
	I0728 18:46:36.658589    4673 default_sa.go:45] found service account: "default"
	I0728 18:46:36.658602    4673 default_sa.go:55] duration metric: took 197.402552ms for default service account to be created ...
	I0728 18:46:36.658609    4673 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 18:46:36.855067    4673 request.go:629] Waited for 196.404299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.855222    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.855233    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.855254    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.855264    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.858883    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:36.858899    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.858909    4673 round_trippers.go:580]     Audit-Id: a700e568-5bf5-4e76-b117-bcb58a728fa3
	I0728 18:46:36.858917    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.858923    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.858929    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.858933    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.858938    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:37 GMT
	I0728 18:46:36.860402    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86400 chars]
	I0728 18:46:36.862307    4673 system_pods.go:86] 12 kube-system pods found
	I0728 18:46:36.862318    4673 system_pods.go:89] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:46:36.862323    4673 system_pods.go:89] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:46:36.862326    4673 system_pods.go:89] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:46:36.862330    4673 system_pods.go:89] "kindnet-5dhhf" [e124802a-dbb6-4100-8c49-8a75ea05217a] Running
	I0728 18:46:36.862334    4673 system_pods.go:89] "kindnet-8hhwv" [487e32b7-7175-4187-89ba-90bb4d597681] Running
	I0728 18:46:36.862337    4673 system_pods.go:89] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:46:36.862340    4673 system_pods.go:89] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:46:36.862347    4673 system_pods.go:89] "kube-proxy-7gm24" [9db42267-b01f-40a3-bf21-c4d8cf6fb372] Running
	I0728 18:46:36.862351    4673 system_pods.go:89] "kube-proxy-dzz6p" [577d6ba2-e17a-426f-8315-1688766fa435] Running
	I0728 18:46:36.862354    4673 system_pods.go:89] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:46:36.862358    4673 system_pods.go:89] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:46:36.862363    4673 system_pods.go:89] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 18:46:36.862368    4673 system_pods.go:126] duration metric: took 203.756211ms to wait for k8s-apps to be running ...
	I0728 18:46:36.862373    4673 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:46:36.862422    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:46:36.874191    4673 system_svc.go:56] duration metric: took 11.813962ms WaitForService to wait for kubelet
	I0728 18:46:36.874209    4673 kubeadm.go:582] duration metric: took 31.61010905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:36.874221    4673 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:46:37.055720    4673 request.go:629] Waited for 181.451407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:46:37.055857    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:46:37.055868    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:37.055876    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:37.055884    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:37.058412    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:37.058425    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:37.058433    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:37.058437    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:37.058440    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:37 GMT
	I0728 18:46:37.058444    4673 round_trippers.go:580]     Audit-Id: cb9bf5f0-9a22-4094-8dd3-972ad61b1792
	I0728 18:46:37.058448    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:37.058451    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:37.058945    4673 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 14177 chars]
	I0728 18:46:37.059487    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:37.059500    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:37.059510    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:37.059518    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:37.059532    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:37.059536    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:37.059541    4673 node_conditions.go:105] duration metric: took 185.312345ms to run NodePressure ...
	I0728 18:46:37.059551    4673 start.go:241] waiting for startup goroutines ...
	I0728 18:46:37.059559    4673 start.go:246] waiting for cluster config update ...
	I0728 18:46:37.059573    4673 start.go:255] writing updated cluster config ...
	I0728 18:46:37.080324    4673 out.go:177] 
	I0728 18:46:37.102477    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:37.102625    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:46:37.126122    4673 out.go:177] * Starting "multinode-362000-m02" worker node in "multinode-362000" cluster
	I0728 18:46:37.169063    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:46:37.169099    4673 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:37.169314    4673 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:46:37.169335    4673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:46:37.169472    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:46:37.170711    4673 start.go:360] acquireMachinesLock for multinode-362000-m02: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:37.170834    4673 start.go:364] duration metric: took 97.592µs to acquireMachinesLock for "multinode-362000-m02"
	I0728 18:46:37.170860    4673 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:37.170868    4673 fix.go:54] fixHost starting: m02
	I0728 18:46:37.171310    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:37.171338    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:37.180385    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52877
	I0728 18:46:37.180766    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:37.181099    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:37.181110    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:37.181327    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:37.181459    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:37.181557    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:46:37.181637    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:37.181723    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:46:37.182624    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid 4486 missing from process table
	I0728 18:46:37.182663    4673 fix.go:112] recreateIfNeeded on multinode-362000-m02: state=Stopped err=<nil>
	I0728 18:46:37.182699    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	W0728 18:46:37.182776    4673 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:37.203928    4673 out.go:177] * Restarting existing hyperkit VM for "multinode-362000-m02" ...
	I0728 18:46:37.245921    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .Start
	I0728 18:46:37.246363    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:37.246420    4673 main.go:141] libmachine: (multinode-362000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid
	I0728 18:46:37.248123    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid 4486 missing from process table
	I0728 18:46:37.248141    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | pid 4486 is in state "Stopped"
	I0728 18:46:37.248164    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid...
	I0728 18:46:37.248742    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Using UUID 803737f6-60f1-4d1a-bdda-22c83e05ebd1
	I0728 18:46:37.275290    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Generated MAC 6:55:c7:17:95:12
	I0728 18:46:37.275312    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:46:37.275454    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000405350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:46:37.275488    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000405350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:46:37.275537    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "803737f6-60f1-4d1a-bdda-22c83e05ebd1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/j
enkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:46:37.275574    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 803737f6-60f1-4d1a-bdda-22c83e05ebd1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/mult
inode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:46:37.275583    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:46:37.277050    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Pid is 4695
	I0728 18:46:37.277444    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 0
	I0728 18:46:37.277479    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:37.278136    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4695
	I0728 18:46:37.279153    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:46:37.279247    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:46:37.279263    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a845cb}
	I0728 18:46:37.279287    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
	I0728 18:46:37.279301    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:46:37.279315    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Found match: 6:55:c7:17:95:12
	I0728 18:46:37.279327    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | IP: 192.169.0.14
	I0728 18:46:37.279358    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:46:37.280046    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:37.280241    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:46:37.280726    4673 machine.go:94] provisionDockerMachine start ...
	I0728 18:46:37.280738    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:37.280865    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:37.280969    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:37.281063    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:37.281149    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:37.281225    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:37.281387    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:37.281571    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:37.281579    4673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:46:37.285163    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:46:37.293106    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:46:37.294195    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:46:37.294211    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:46:37.294219    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:46:37.294227    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:46:37.678909    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:46:37.678928    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:46:37.793676    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:46:37.793707    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:46:37.793717    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:46:37.793731    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:46:37.794515    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:46:37.794524    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:46:43.388045    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:46:43.388113    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:46:43.388123    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:46:43.411630    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:46:48.338747    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:46:48.338763    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:46:48.338902    4673 buildroot.go:166] provisioning hostname "multinode-362000-m02"
	I0728 18:46:48.338914    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:46:48.339003    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.339080    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.339173    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.339249    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.339327    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.339462    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.339605    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.339614    4673 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m02 && echo "multinode-362000-m02" | sudo tee /etc/hostname
	I0728 18:46:48.399738    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m02
	
	I0728 18:46:48.399753    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.399878    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.399983    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.400072    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.400176    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.400303    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.400441    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.400452    4673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:46:48.454950    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:46:48.454974    4673 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:46:48.454993    4673 buildroot.go:174] setting up certificates
	I0728 18:46:48.454999    4673 provision.go:84] configureAuth start
	I0728 18:46:48.455006    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:46:48.455155    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:48.455258    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.455356    4673 provision.go:143] copyHostCerts
	I0728 18:46:48.455387    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:46:48.455451    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:46:48.455457    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:46:48.455838    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:46:48.456074    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:46:48.456115    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:46:48.456120    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:46:48.456222    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:46:48.456370    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:46:48.456412    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:46:48.456417    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:46:48.456517    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:46:48.456687    4673 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-362000-m02]
	I0728 18:46:48.562747    4673 provision.go:177] copyRemoteCerts
	I0728 18:46:48.562797    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:46:48.562812    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.562955    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.563073    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.563160    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.563248    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:48.594219    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:46:48.594286    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:46:48.613653    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:46:48.613720    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:46:48.633022    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:46:48.633087    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:46:48.652312    4673 provision.go:87] duration metric: took 197.30092ms to configureAuth
	I0728 18:46:48.652326    4673 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:46:48.652490    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:48.652518    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:48.652647    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.652719    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.652809    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.652902    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.652987    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.653090    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.653211    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.653218    4673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:46:48.701718    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:46:48.701730    4673 buildroot.go:70] root file system type: tmpfs
	I0728 18:46:48.701803    4673 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:46:48.701814    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.701938    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.702016    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.702108    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.702184    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.702318    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.702459    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.702507    4673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:46:48.760488    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:46:48.760507    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.760654    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.760771    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.760872    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.760982    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.761116    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.761257    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.761270    4673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:46:50.332441    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:46:50.332463    4673 machine.go:97] duration metric: took 13.051821636s to provisionDockerMachine
	I0728 18:46:50.332471    4673 start.go:293] postStartSetup for "multinode-362000-m02" (driver="hyperkit")
	I0728 18:46:50.332495    4673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:46:50.332510    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.332723    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:46:50.332735    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:50.332845    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.332941    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.333040    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.333118    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:50.368917    4673 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:46:50.372602    4673 command_runner.go:130] > NAME=Buildroot
	I0728 18:46:50.372611    4673 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:46:50.372615    4673 command_runner.go:130] > ID=buildroot
	I0728 18:46:50.372619    4673 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:46:50.372623    4673 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:46:50.372712    4673 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:46:50.372720    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:46:50.372817    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:46:50.373004    4673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:46:50.373010    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:46:50.373216    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:46:50.385453    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:46:50.412967    4673 start.go:296] duration metric: took 80.473695ms for postStartSetup
	I0728 18:46:50.412990    4673 fix.go:56] duration metric: took 13.242218481s for fixHost
	I0728 18:46:50.413012    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:50.413158    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.413245    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.413340    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.413423    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.413545    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:50.413686    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:50.413694    4673 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:46:50.463985    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217610.598970634
	
	I0728 18:46:50.463996    4673 fix.go:216] guest clock: 1722217610.598970634
	I0728 18:46:50.464002    4673 fix.go:229] Guest: 2024-07-28 18:46:50.598970634 -0700 PDT Remote: 2024-07-28 18:46:50.412997 -0700 PDT m=+72.030483613 (delta=185.973634ms)
	I0728 18:46:50.464012    4673 fix.go:200] guest clock delta is within tolerance: 185.973634ms
	I0728 18:46:50.464016    4673 start.go:83] releasing machines lock for "multinode-362000-m02", held for 13.293267871s
	I0728 18:46:50.464033    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.464157    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:50.484636    4673 out.go:177] * Found network options:
	I0728 18:46:50.505437    4673 out.go:177]   - NO_PROXY=192.169.0.13
	W0728 18:46:50.527352    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:46:50.527391    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.528310    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.528592    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.528715    4673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:46:50.528756    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	W0728 18:46:50.528835    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:46:50.528942    4673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:46:50.528963    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:50.528960    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.529192    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.529230    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.529340    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.529373    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.529487    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:50.529516    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.529633    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:50.556694    4673 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:46:50.556800    4673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:46:50.556861    4673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:46:50.606414    4673 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:46:50.606446    4673 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:46:50.606457    4673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:46:50.606466    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:46:50.606561    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:46:50.621864    4673 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:46:50.622119    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:46:50.631126    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:46:50.640070    4673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:46:50.640119    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:46:50.648931    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:46:50.657813    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:46:50.666736    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:46:50.675962    4673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:46:50.685166    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:46:50.694029    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:46:50.702688    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:46:50.711634    4673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:46:50.719728    4673 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:46:50.719881    4673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:46:50.727966    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:50.824868    4673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:46:50.842133    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:46:50.842204    4673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:46:50.856570    4673 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:46:50.856706    4673 command_runner.go:130] > [Unit]
	I0728 18:46:50.856714    4673 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:46:50.856718    4673 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:46:50.856729    4673 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:46:50.856734    4673 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:46:50.856738    4673 command_runner.go:130] > StartLimitBurst=3
	I0728 18:46:50.856742    4673 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:46:50.856746    4673 command_runner.go:130] > [Service]
	I0728 18:46:50.856749    4673 command_runner.go:130] > Type=notify
	I0728 18:46:50.856756    4673 command_runner.go:130] > Restart=on-failure
	I0728 18:46:50.856760    4673 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0728 18:46:50.856767    4673 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:46:50.856773    4673 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:46:50.856779    4673 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:46:50.856785    4673 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:46:50.856791    4673 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:46:50.856797    4673 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:46:50.856802    4673 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:46:50.856812    4673 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:46:50.856819    4673 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:46:50.856824    4673 command_runner.go:130] > ExecStart=
	I0728 18:46:50.856838    4673 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:46:50.856843    4673 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:46:50.856853    4673 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:46:50.856860    4673 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:46:50.856863    4673 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:46:50.856866    4673 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:46:50.856870    4673 command_runner.go:130] > LimitCORE=infinity
	I0728 18:46:50.856875    4673 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:46:50.856879    4673 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:46:50.856882    4673 command_runner.go:130] > TasksMax=infinity
	I0728 18:46:50.856886    4673 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:46:50.856894    4673 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:46:50.856897    4673 command_runner.go:130] > Delegate=yes
	I0728 18:46:50.856902    4673 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:46:50.856910    4673 command_runner.go:130] > KillMode=process
	I0728 18:46:50.856915    4673 command_runner.go:130] > [Install]
	I0728 18:46:50.856918    4673 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:46:50.857020    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:46:50.871266    4673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:46:50.888814    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:46:50.899257    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:46:50.909517    4673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:46:50.928866    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:46:50.940308    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:46:50.954963    4673 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:46:50.955343    4673 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:46:50.958224    4673 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:46:50.958383    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:46:50.965826    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:46:50.979903    4673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:46:51.080819    4673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:46:51.185908    4673 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:46:51.185935    4673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:46:51.199686    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:51.301774    4673 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:46:53.591374    4673 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.289598801s)
	I0728 18:46:53.591423    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:46:53.602727    4673 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:46:53.616558    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:46:53.627458    4673 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:46:53.721063    4673 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:46:53.827566    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:53.938284    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:46:53.952100    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:46:53.963267    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:54.078472    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:46:54.137534    4673 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:46:54.137615    4673 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:46:54.141915    4673 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:46:54.141930    4673 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:46:54.141935    4673 command_runner.go:130] > Device: 0,22	Inode: 745         Links: 1
	I0728 18:46:54.141940    4673 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:46:54.141944    4673 command_runner.go:130] > Access: 2024-07-29 01:46:54.227951753 +0000
	I0728 18:46:54.141955    4673 command_runner.go:130] > Modify: 2024-07-29 01:46:54.227951753 +0000
	I0728 18:46:54.141959    4673 command_runner.go:130] > Change: 2024-07-29 01:46:54.228951679 +0000
	I0728 18:46:54.141966    4673 command_runner.go:130] >  Birth: -
	I0728 18:46:54.141993    4673 start.go:563] Will wait 60s for crictl version
	I0728 18:46:54.142047    4673 ssh_runner.go:195] Run: which crictl
	I0728 18:46:54.144853    4673 command_runner.go:130] > /usr/bin/crictl
	I0728 18:46:54.144959    4673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:46:54.171431    4673 command_runner.go:130] > Version:  0.1.0
	I0728 18:46:54.171446    4673 command_runner.go:130] > RuntimeName:  docker
	I0728 18:46:54.171450    4673 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:46:54.171454    4673 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:46:54.172503    4673 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:46:54.172577    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:46:54.191400    4673 command_runner.go:130] > 27.1.0
	I0728 18:46:54.192567    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:46:54.209519    4673 command_runner.go:130] > 27.1.0
	I0728 18:46:54.231668    4673 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:46:54.273335    4673 out.go:177]   - env NO_PROXY=192.169.0.13
	I0728 18:46:54.294441    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:54.294833    4673 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:46:54.298880    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
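The `grep -v … ; echo … > /tmp/h.$$` one-liner above is minikube's idempotent /etc/hosts update: strip any stale `host.minikube.internal` line, append a fresh one, then copy the temp file back over /etc/hosts. A minimal sketch of the same pattern against a throwaway file (all paths here are illustrative, not the VM's real /etc/hosts):

```shell
# Idempotent hosts-entry rewrite, as in the log above.
HOSTS=$(mktemp)
TAB=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.169.0.1\thost.minikube.internal\n' > "$HOSTS"
# Drop any existing entry, then append the current one -- running this
# repeatedly always leaves exactly one host.minikube.internal line.
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
grep -c 'host.minikube.internal' "$HOSTS"
```

Writing to a temp file and `cp`-ing back (rather than redirecting into /etc/hosts directly) avoids truncating the file mid-read and works under `sudo` where a shell redirect would not.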
	I0728 18:46:54.308145    4673 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:46:54.308318    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:54.308552    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.308568    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.317259    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52898
	I0728 18:46:54.317604    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.317948    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.317964    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.318184    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.318294    4673 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:46:54.318377    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:54.318467    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:46:54.319549    4673 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:46:54.319799    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.319816    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.328302    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52900
	I0728 18:46:54.328634    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.328960    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.328972    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.329182    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.329302    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:46:54.329393    4673 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.14
	I0728 18:46:54.329399    4673 certs.go:194] generating shared ca certs ...
	I0728 18:46:54.329411    4673 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:54.329592    4673 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:46:54.329666    4673 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:46:54.329677    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:46:54.329700    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:46:54.329720    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:46:54.329738    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:46:54.329829    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:46:54.329879    4673 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:46:54.329889    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:46:54.329927    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:46:54.329958    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:46:54.329986    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:46:54.330048    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:46:54.330086    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.330106    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.330129    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.330155    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:46:54.350393    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:46:54.370269    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:46:54.389580    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:46:54.408538    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:46:54.427454    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:46:54.446481    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:46:54.465487    4673 ssh_runner.go:195] Run: openssl version
	I0728 18:46:54.469435    4673 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:46:54.469642    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:46:54.478604    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.481736    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.481923    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.481961    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.485902    4673 command_runner.go:130] > 51391683
	I0728 18:46:54.486146    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:46:54.495167    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:46:54.504152    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.507315    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.507464    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.507502    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.511525    4673 command_runner.go:130] > 3ec20f2e
	I0728 18:46:54.511693    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:46:54.520673    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:46:54.529638    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.532773    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.532893    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.532924    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.536882    4673 command_runner.go:130] > b5213941
	I0728 18:46:54.537124    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
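The `51391683.0`, `3ec20f2e.0`, and `b5213941.0` symlinks created above follow OpenSSL's hashed-directory convention: `openssl x509 -hash` prints an 8-hex-digit hash of the certificate subject, and a symlink named `<hash>.0` lets OpenSSL locate the CA cert during verification. A self-contained sketch (a throwaway self-signed cert in a temp dir, not minikube's actual CA):

```shell
# Generate a disposable self-signed cert, then link it the way the log does.
D=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$D/demo.key" -out "$D/demo.pem" 2>/dev/null
# Subject hash: 8 lowercase hex digits, used as the symlink basename.
HASH=$(openssl x509 -hash -noout -in "$D/demo.pem")
ln -fs "$D/demo.pem" "$D/${HASH}.0"
echo "$HASH"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value (`.1`, `.2`, … would follow), which is why the log tests `-L /etc/ssl/certs/<hash>.0` before linking.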
	I0728 18:46:54.546070    4673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:46:54.548944    4673 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:46:54.549055    4673 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:46:54.549088    4673 kubeadm.go:934] updating node {m02 192.169.0.14 8443 v1.30.3 docker false true} ...
	I0728 18:46:54.549144    4673 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:46:54.549183    4673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:46:54.557127    4673 command_runner.go:130] > kubeadm
	I0728 18:46:54.557135    4673 command_runner.go:130] > kubectl
	I0728 18:46:54.557140    4673 command_runner.go:130] > kubelet
	I0728 18:46:54.557199    4673 binaries.go:44] Found k8s binaries, skipping transfer
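The "Found k8s binaries, skipping transfer" decision above reduces to listing the staged version directory and confirming the three expected tools are present before skipping the scp. A sketch of that check against a stand-in directory (the directory and file contents here are placeholders, not real binaries):

```shell
# Stand-in for /var/lib/minikube/binaries/<version>.
BIN=$(mktemp -d)
touch "$BIN/kubeadm" "$BIN/kubectl" "$BIN/kubelet"
# Skip the transfer only if every expected binary is already staged.
for tool in kubeadm kubectl kubelet; do
  [ -e "$BIN/$tool" ] || { echo "missing $tool"; exit 1; }
done
echo "all binaries present"
```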
	I0728 18:46:54.557243    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0728 18:46:54.565192    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0728 18:46:54.578634    4673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:46:54.592042    4673 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:46:54.594943    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:46:54.604909    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:54.700553    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:46:54.715123    4673 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:46:54.715394    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.715413    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.724542    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52902
	I0728 18:46:54.724903    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.725281    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.725304    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.725544    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.725668    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:46:54.725766    4673 start.go:317] joinCluster: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:54.725874    4673 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:54.725896    4673 host.go:66] Checking if "multinode-362000-m02" exists ...
	I0728 18:46:54.726163    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.726182    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.735248    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52904
	I0728 18:46:54.735604    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.735957    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.735976    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.736202    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.736322    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:54.736409    4673 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:46:54.736584    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:54.736803    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.736822    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.745568    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52906
	I0728 18:46:54.745922    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.746253    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.746263    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.746483    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.746596    4673 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:46:54.746678    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:54.746758    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:46:54.747695    4673 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:46:54.747964    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.747981    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.756703    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52908
	I0728 18:46:54.757040    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.757355    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.757366    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.757565    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.757681    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:46:54.757774    4673 api_server.go:166] Checking apiserver status ...
	I0728 18:46:54.757820    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:46:54.757831    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:46:54.757905    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:46:54.758008    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:46:54.758106    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:46:54.758198    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:46:54.806335    4673 command_runner.go:130] > 1742
	I0728 18:46:54.806444    4673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup
	W0728 18:46:54.815059    4673 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:46:54.815113    4673 ssh_runner.go:195] Run: ls
	I0728 18:46:54.818354    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:54.821324    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:46:54.821371    4673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-362000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0728 18:46:54.908684    4673 command_runner.go:130] > node/multinode-362000-m02 cordoned
	I0728 18:46:57.929209    4673 command_runner.go:130] > pod "busybox-fc5497c4f-svnlx" has DeletionTimestamp older than 1 seconds, skipping
	I0728 18:46:57.929285    4673 command_runner.go:130] > node/multinode-362000-m02 drained
	I0728 18:46:57.930945    4673 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-8hhwv, kube-system/kube-proxy-dzz6p
	I0728 18:46:57.931070    4673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-362000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.109696864s)
	I0728 18:46:57.931083    4673 node.go:128] successfully drained node "multinode-362000-m02"
	I0728 18:46:57.931113    4673 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0728 18:46:57.931135    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:57.931291    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:57.931385    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:57.931475    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:57.931571    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:58.018108    4673 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:46:58.018262    4673 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0728 18:46:58.018301    4673 command_runner.go:130] > [reset] Stopping the kubelet service
	I0728 18:46:58.024499    4673 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0728 18:46:58.235360    4673 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0728 18:46:58.236942    4673 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0728 18:46:58.237014    4673 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0728 18:46:58.237025    4673 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0728 18:46:58.237031    4673 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0728 18:46:58.237036    4673 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0728 18:46:58.237041    4673 command_runner.go:130] > to reset your system's IPVS tables.
	I0728 18:46:58.237053    4673 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0728 18:46:58.237070    4673 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0728 18:46:58.237833    4673 command_runner.go:130] ! W0729 01:46:58.158346    1317 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0728 18:46:58.237859    4673 command_runner.go:130] ! W0729 01:46:58.375481    1317 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod ccdd12c4acff53ab3d996d68ff20e1434ae4b03bba8407120e64a2b4a503be78: output: E0729 01:46:58.285481    1347 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-svnlx_default\" network: cni config uninitialized" podSandboxID="ccdd12c4acff53ab3d996d68ff20e1434ae4b03bba8407120e64a2b4a503be78"
	I0728 18:46:58.237872    4673 command_runner.go:130] ! time="2024-07-29T01:46:58Z" level=fatal msg="stopping the pod sandbox \"ccdd12c4acff53ab3d996d68ff20e1434ae4b03bba8407120e64a2b4a503be78\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-svnlx_default\" network: cni config uninitialized"
	I0728 18:46:58.237876    4673 command_runner.go:130] ! : exit status 1
	I0728 18:46:58.237886    4673 node.go:155] successfully reset node "multinode-362000-m02"
	I0728 18:46:58.238162    4673 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:46:58.238385    4673 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10bd5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:46:58.238654    4673 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0728 18:46:58.238686    4673 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:58.238690    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:58.238695    4673 round_trippers.go:473]     Content-Type: application/json
	I0728 18:46:58.238699    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:58.238702    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:58.241342    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:58.241352    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:58.241357    4673 round_trippers.go:580]     Audit-Id: c133346d-1d9d-41d5-9bb8-01a0c040940d
	I0728 18:46:58.241367    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:58.241370    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:58.241373    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:58.241377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:58.241381    4673 round_trippers.go:580]     Content-Length: 171
	I0728 18:46:58.241389    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:58 GMT
	I0728 18:46:58.241400    4673 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-362000-m02","kind":"nodes","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90"}}
	I0728 18:46:58.241417    4673 node.go:180] successfully deleted node "multinode-362000-m02"
	I0728 18:46:58.241424    4673 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:58.241442    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0728 18:46:58.241456    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:46:58.241610    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:46:58.241710    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:46:58.241832    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:46:58.241924    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:46:58.340820    4673 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dhteq6.jo67xl499g7wortn --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:46:58.340863    4673 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:58.340881    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dhteq6.jo67xl499g7wortn --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02"
	I0728 18:46:58.456507    4673 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:46:59.123671    4673 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:46:59.123686    4673 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 18:46:59.123694    4673 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 18:46:59.123712    4673 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:46:59.123723    4673 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:46:59.123728    4673 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:46:59.123736    4673 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:46:59.123741    4673 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.380082ms
	I0728 18:46:59.123745    4673 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0728 18:46:59.123749    4673 command_runner.go:130] > This node has joined the cluster:
	I0728 18:46:59.123755    4673 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0728 18:46:59.123760    4673 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0728 18:46:59.123765    4673 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0728 18:46:59.123788    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0728 18:46:59.332343    4673 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0728 18:46:59.332487    4673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-362000-m02 minikube.k8s.io/updated_at=2024_07_28T18_46_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=multinode-362000 minikube.k8s.io/primary=false
	I0728 18:46:59.404418    4673 command_runner.go:130] > node/multinode-362000-m02 labeled
	I0728 18:46:59.405437    4673 start.go:319] duration metric: took 4.679707009s to joinCluster
	I0728 18:46:59.405479    4673 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:59.405665    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:59.427713    4673 out.go:177] * Verifying Kubernetes components...
	I0728 18:46:59.469772    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:59.570321    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:46:59.581522    4673 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:46:59.581718    4673 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10bd5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:46:59.581899    4673 node_ready.go:35] waiting up to 6m0s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:46:59.581939    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:59.581944    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:59.581949    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:59.581953    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:59.583579    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:59.583588    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:59.583593    4673 round_trippers.go:580]     Audit-Id: 0be9369c-d23d-44aa-aa15-d62e88617b5a
	I0728 18:46:59.583596    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:59.583600    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:59.583607    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:59.583610    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:59.583612    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:59 GMT
	I0728 18:46:59.583687    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:00.082619    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:00.082645    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:00.082662    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:00.082747    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:00.085440    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:00.085454    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:00.085460    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:00.085481    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:00.085494    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:00.085499    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:00.085510    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:00 GMT
	I0728 18:47:00.085516    4673 round_trippers.go:580]     Audit-Id: a92f9e39-ec6f-499d-b288-17ddfa0dce67
	I0728 18:47:00.085597    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:00.583529    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:00.583544    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:00.583597    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:00.583602    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:00.585299    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:00.585308    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:00.585312    4673 round_trippers.go:580]     Audit-Id: 6ce7c3f0-0595-484c-af8d-fe3c0974c93e
	I0728 18:47:00.585316    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:00.585319    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:00.585321    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:00.585324    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:00.585328    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:00 GMT
	I0728 18:47:00.585498    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:01.083299    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:01.083332    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:01.083414    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:01.083424    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:01.086308    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:01.086328    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:01.086338    4673 round_trippers.go:580]     Audit-Id: d1797d5e-4346-4fea-93c6-ea3b534ad6f7
	I0728 18:47:01.086345    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:01.086352    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:01.086358    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:01.086363    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:01.086369    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:01 GMT
	I0728 18:47:01.086463    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:01.583575    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:01.583593    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:01.583599    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:01.583601    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:01.585325    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:01.585336    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:01.585342    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:01.585345    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:01.585348    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:01 GMT
	I0728 18:47:01.585351    4673 round_trippers.go:580]     Audit-Id: a1206081-4a79-4797-be1b-91493a445154
	I0728 18:47:01.585353    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:01.585355    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:01.585466    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:01.585645    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:02.083477    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:02.083502    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:02.083592    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:02.083601    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:02.086148    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:02.086163    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:02.086173    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:02.086181    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:02.086188    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:02 GMT
	I0728 18:47:02.086194    4673 round_trippers.go:580]     Audit-Id: 8c627219-2e40-475e-b83f-266af5621abd
	I0728 18:47:02.086201    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:02.086205    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:02.086661    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:02.583535    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:02.583554    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:02.583562    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:02.583567    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:02.585924    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:02.585934    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:02.585940    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:02.585944    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:02 GMT
	I0728 18:47:02.585947    4673 round_trippers.go:580]     Audit-Id: 977c9a16-d746-4d94-8632-7a74cefa5500
	I0728 18:47:02.585949    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:02.585952    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:02.585954    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:02.586085    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:03.082097    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:03.082124    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:03.082135    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:03.082141    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:03.084163    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:03.084176    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:03.084183    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:03.084188    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:03.084194    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:03 GMT
	I0728 18:47:03.084199    4673 round_trippers.go:580]     Audit-Id: 678c63cb-f72b-49c1-8395-41fd933e6d38
	I0728 18:47:03.084205    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:03.084208    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:03.084280    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:03.583641    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:03.583683    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:03.583769    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:03.583779    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:03.586515    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:03.586531    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:03.586543    4673 round_trippers.go:580]     Audit-Id: 8c9f1775-db72-4369-856a-00cef1bc50ba
	I0728 18:47:03.586546    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:03.586551    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:03.586555    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:03.586559    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:03.586564    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:03 GMT
	I0728 18:47:03.586633    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:03.586842    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:04.083562    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:04.083586    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:04.083657    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:04.083667    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:04.086339    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:04.086352    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:04.086370    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:04.086375    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:04.086378    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:04 GMT
	I0728 18:47:04.086382    4673 round_trippers.go:580]     Audit-Id: 5937a6af-6f15-4fea-8e5c-a4d3de23ff73
	I0728 18:47:04.086385    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:04.086388    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:04.086760    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:04.583503    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:04.583526    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:04.583536    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:04.583543    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:04.586213    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:04.586226    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:04.586234    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:04.586242    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:04.586247    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:04.586252    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:04.586258    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:04 GMT
	I0728 18:47:04.586265    4673 round_trippers.go:580]     Audit-Id: 5e772194-7b0c-4238-ad13-1965058f1e80
	I0728 18:47:04.586509    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:05.083929    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:05.083955    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:05.084062    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:05.084075    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:05.086388    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:05.086399    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:05.086406    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:05 GMT
	I0728 18:47:05.086410    4673 round_trippers.go:580]     Audit-Id: 48ccfbdf-f1af-4a34-9739-ca888d40d18d
	I0728 18:47:05.086414    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:05.086418    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:05.086423    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:05.086427    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:05.086672    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:05.583613    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:05.583640    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:05.583681    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:05.583694    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:05.586237    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:05.586250    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:05.586257    4673 round_trippers.go:580]     Audit-Id: e7b11eac-026b-49e8-af17-fd8c3bed843a
	I0728 18:47:05.586261    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:05.586266    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:05.586271    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:05.586278    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:05.586285    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:05 GMT
	I0728 18:47:05.586500    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:06.082733    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:06.082760    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:06.082769    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:06.082772    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:06.084557    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:06.084565    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:06.084570    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:06 GMT
	I0728 18:47:06.084573    4673 round_trippers.go:580]     Audit-Id: fc198dac-3f3d-4556-91d3-121b753a1ba0
	I0728 18:47:06.084576    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:06.084580    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:06.084584    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:06.084587    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:06.084754    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:06.084918    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:06.583526    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:06.583559    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:06.583573    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:06.583579    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:06.586174    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:06.586195    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:06.586203    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:06.586208    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:06 GMT
	I0728 18:47:06.586213    4673 round_trippers.go:580]     Audit-Id: 19b104e3-3cb4-493d-9ca1-79028198dcff
	I0728 18:47:06.586217    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:06.586230    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:06.586235    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:06.586313    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:07.082627    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:07.082653    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:07.082664    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:07.082671    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:07.085339    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:07.085354    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:07.085360    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:07.085364    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:07.085368    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:07 GMT
	I0728 18:47:07.085376    4673 round_trippers.go:580]     Audit-Id: a236a5a6-2ec6-4ada-a8c6-15c1e07ab613
	I0728 18:47:07.085380    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:07.085386    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:07.085456    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:07.583561    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:07.583588    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:07.583600    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:07.583606    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:07.586272    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:07.586292    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:07.586299    4673 round_trippers.go:580]     Audit-Id: 3ea443e1-c9f4-4c9c-ac6b-d6bcc8ce04cd
	I0728 18:47:07.586304    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:07.586310    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:07.586314    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:07.586318    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:07.586321    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:07 GMT
	I0728 18:47:07.586385    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:08.082842    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:08.082867    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:08.082877    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:08.082882    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:08.085218    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:08.085230    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:08.085237    4673 round_trippers.go:580]     Audit-Id: b52fb803-933c-4628-affa-c6866ccbd1da
	I0728 18:47:08.085251    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:08.085258    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:08.085264    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:08.085269    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:08.085276    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:08 GMT
	I0728 18:47:08.085461    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:08.085674    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:08.583592    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:08.583610    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:08.583617    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:08.583622    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:08.585747    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:08.585757    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:08.585770    4673 round_trippers.go:580]     Audit-Id: 05ed2d15-b1ed-43c1-a795-249105341cb1
	I0728 18:47:08.585775    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:08.585781    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:08.585784    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:08.585787    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:08.585790    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:08 GMT
	I0728 18:47:08.586064    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:09.082069    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:09.082095    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:09.082107    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:09.082122    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:09.084654    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:09.084667    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:09.084674    4673 round_trippers.go:580]     Audit-Id: 38a2e509-bfae-440c-a13d-9b0670664c44
	I0728 18:47:09.084682    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:09.084687    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:09.084693    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:09.084700    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:09.084706    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:09 GMT
	I0728 18:47:09.084772    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:09.583552    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:09.583581    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:09.583594    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:09.583681    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:09.586191    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:09.586206    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:09.586213    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:09.586217    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:09 GMT
	I0728 18:47:09.586222    4673 round_trippers.go:580]     Audit-Id: 244e93c2-0ae9-43df-a5c8-07133b904a24
	I0728 18:47:09.586255    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:09.586264    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:09.586274    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:09.586367    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:10.083269    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:10.083370    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:10.083384    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:10.083391    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:10.085996    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:10.086015    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:10.086023    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:10.086028    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:10.086032    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:10 GMT
	I0728 18:47:10.086063    4673 round_trippers.go:580]     Audit-Id: d68a320d-bf05-4f48-a789-117a8e33b47b
	I0728 18:47:10.086074    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:10.086081    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:10.086229    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:10.086450    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:10.583491    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:10.583518    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:10.583530    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:10.583536    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:10.586279    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:10.586292    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:10.586299    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:10.586304    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:10.586308    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:10.586311    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:10 GMT
	I0728 18:47:10.586316    4673 round_trippers.go:580]     Audit-Id: 1e516778-0609-427d-9b2d-94936c11d2b3
	I0728 18:47:10.586320    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:10.586718    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:11.083441    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:11.083460    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:11.083468    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:11.083471    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:11.085507    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:11.085521    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:11.085527    4673 round_trippers.go:580]     Audit-Id: bfbe1c79-2000-473a-8d45-9dd4cfa52187
	I0728 18:47:11.085535    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:11.085538    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:11.085540    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:11.085543    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:11.085546    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:11 GMT
	I0728 18:47:11.085653    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:11.583494    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:11.583522    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:11.583533    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:11.583539    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:11.586432    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:11.586447    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:11.586455    4673 round_trippers.go:580]     Audit-Id: ea4443b5-8768-4649-90e3-04c255fdd021
	I0728 18:47:11.586458    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:11.586462    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:11.586465    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:11.586470    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:11.586473    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:11 GMT
	I0728 18:47:11.586608    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:12.083737    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:12.083757    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:12.083798    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:12.083805    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:12.085657    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:12.085666    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:12.085683    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:12.085696    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:12 GMT
	I0728 18:47:12.085700    4673 round_trippers.go:580]     Audit-Id: 24ca3df1-827e-402f-9e1d-7153d754fe03
	I0728 18:47:12.085704    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:12.085707    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:12.085725    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:12.085836    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:12.583551    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:12.583585    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:12.583596    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:12.583602    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:12.586276    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:12.586292    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:12.586300    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:12.586306    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:12 GMT
	I0728 18:47:12.586309    4673 round_trippers.go:580]     Audit-Id: ac02a110-f61d-43a7-a2d9-2e8deb40894a
	I0728 18:47:12.586313    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:12.586317    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:12.586320    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:12.586393    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:12.586620    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:13.083427    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:13.083462    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:13.083473    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:13.083481    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:13.086185    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:13.086200    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:13.086208    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:13.086214    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:13.086223    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:13 GMT
	I0728 18:47:13.086236    4673 round_trippers.go:580]     Audit-Id: 1ecb1c00-20b1-4739-b671-ef5f0f726f67
	I0728 18:47:13.086246    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:13.086259    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:13.086342    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:13.583552    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:13.583576    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:13.583588    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:13.583596    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:13.586285    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:13.586298    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:13.586305    4673 round_trippers.go:580]     Audit-Id: 63540feb-66f6-472f-ae28-ec3f5b163290
	I0728 18:47:13.586310    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:13.586313    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:13.586317    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:13.586321    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:13.586325    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:13 GMT
	I0728 18:47:13.586595    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:14.082829    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:14.082854    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.082866    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.082876    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.085357    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.085370    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.085377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.085382    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.085387    4673 round_trippers.go:580]     Audit-Id: 07158bf9-c1df-41b3-875c-270749eaf52a
	I0728 18:47:14.085402    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.085409    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.085415    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.085729    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:14.583527    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:14.583550    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.583561    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.583567    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.586127    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.586138    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.586146    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.586151    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.586161    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.586165    4673 round_trippers.go:580]     Audit-Id: a1dd98f0-ae96-4083-82f9-ae54c771a321
	I0728 18:47:14.586169    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.586172    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.586777    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1113","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0728 18:47:14.587002    4673 node_ready.go:49] node "multinode-362000-m02" has status "Ready":"True"
	I0728 18:47:14.587013    4673 node_ready.go:38] duration metric: took 15.005213007s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:47:14.587020    4673 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:47:14.587063    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:47:14.587071    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.587078    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.587087    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.589399    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.589411    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.589417    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.589420    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.589437    4673 round_trippers.go:580]     Audit-Id: 27366e48-0fac-407c-8309-4d2b8e5d873e
	I0728 18:47:14.589444    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.589447    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.589450    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.590265    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1115"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86442 chars]
	I0728 18:47:14.592159    4673 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.592195    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:47:14.592200    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.592206    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.592210    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.593323    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.593331    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.593336    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.593341    4673 round_trippers.go:580]     Audit-Id: 40e8a029-e8e7-442f-a012-29763697b332
	I0728 18:47:14.593348    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.593352    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.593354    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.593359    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.593532    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0728 18:47:14.593765    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.593771    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.593777    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.593780    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.594753    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.594762    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.594769    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.594774    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.594778    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.594782    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.594805    4673 round_trippers.go:580]     Audit-Id: 54313934-94d7-4b70-b561-5005190065d9
	I0728 18:47:14.594815    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.594983    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.595169    4673 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.595179    4673 pod_ready.go:81] duration metric: took 3.009773ms for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.595185    4673 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.595220    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:47:14.595225    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.595230    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.595235    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.596229    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.596236    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.596243    4673 round_trippers.go:580]     Audit-Id: 8bd93089-422c-4ac2-881d-32fff2f3827d
	I0728 18:47:14.596249    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.596253    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.596258    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.596262    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.596266    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.596373    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"971","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0728 18:47:14.596577    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.596583    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.596589    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.596591    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.597598    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.597606    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.597611    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.597620    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.597623    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.597626    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.597628    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.597632    4673 round_trippers.go:580]     Audit-Id: 3f664d18-6652-4750-97a6-c67ed0e633ee
	I0728 18:47:14.597727    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.597887    4673 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.597896    4673 pod_ready.go:81] duration metric: took 2.707171ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.597906    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.597934    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:47:14.597941    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.597947    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.597952    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.598976    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.598984    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.598989    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.598992    4673 round_trippers.go:580]     Audit-Id: fbbbe501-e880-49f1-8f56-53581a7896c6
	I0728 18:47:14.598996    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.599000    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.599006    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.599009    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.599114    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"961","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0728 18:47:14.599381    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.599388    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.599394    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.599397    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.600352    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.600359    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.600363    4673 round_trippers.go:580]     Audit-Id: 6cf14d7d-8f78-4b11-ad0c-ed366d6ea160
	I0728 18:47:14.600366    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.600369    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.600373    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.600379    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.600382    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.600485    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.600659    4673 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.600667    4673 pod_ready.go:81] duration metric: took 2.755614ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.600673    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.600709    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:47:14.600714    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.600719    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.600721    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.601710    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.601717    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.601722    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.601725    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.601732    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.601737    4673 round_trippers.go:580]     Audit-Id: f3a3e678-dec2-4d3e-9d31-58710e541dbb
	I0728 18:47:14.601740    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.601742    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.601897    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"969","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0728 18:47:14.602126    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.602133    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.602139    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.602143    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.603211    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.603217    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.603221    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.603225    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.603227    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.603229    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.603231    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.603233    4673 round_trippers.go:580]     Audit-Id: 8ae08f39-b644-4897-b6aa-938523cee4a0
	I0728 18:47:14.603401    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.603568    4673 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.603576    4673 pod_ready.go:81] duration metric: took 2.898089ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.603581    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.783734    4673 request.go:629] Waited for 180.090282ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:47:14.783891    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:47:14.783901    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.783912    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.783922    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.786224    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.786238    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.786245    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.786251    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.786259    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.786265    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.786271    4673 round_trippers.go:580]     Audit-Id: 03b73c1d-0925-421d-be62-4e2d5cededf8
	I0728 18:47:14.786275    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.786383    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gm24","generateName":"kube-proxy-","namespace":"kube-system","uid":"9db42267-b01f-40a3-bf21-c4d8cf6fb372","resourceVersion":"1030","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0728 18:47:14.985033    4673 request.go:629] Waited for 198.214721ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:47:14.985099    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:47:14.985110    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.985123    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.985129    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.987719    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.987734    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.987741    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.987745    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.987750    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.987754    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:14.987757    4673 round_trippers.go:580]     Audit-Id: b70ff3a0-6e2b-45a6-9db5-c40b69093c47
	I0728 18:47:14.987760    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.987828    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m03","uid":"f2047331-d0da-470e-8da5-7b725a7d5c49","resourceVersion":"1102","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_44_56_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3899 chars]
	I0728 18:47:14.988052    4673 pod_ready.go:97] node "multinode-362000-m03" hosting pod "kube-proxy-7gm24" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000-m03" has status "Ready":"Unknown"
	I0728 18:47:14.988066    4673 pod_ready.go:81] duration metric: took 384.481958ms for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	E0728 18:47:14.988078    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000-m03" hosting pod "kube-proxy-7gm24" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000-m03" has status "Ready":"Unknown"
	I0728 18:47:14.988084    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.183547    4673 request.go:629] Waited for 195.401729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:47:15.183691    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:47:15.183702    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.183713    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.183720    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.186493    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:15.186508    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.186516    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.186520    4673 round_trippers.go:580]     Audit-Id: 4be157c9-28d3-49c3-be32-55e7eb564fe5
	I0728 18:47:15.186523    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.186527    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.186533    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.186536    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.186618    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"1089","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0728 18:47:15.383495    4673 request.go:629] Waited for 196.542391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:15.383593    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:15.383598    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.383604    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.383609    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.385518    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:15.385529    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.385537    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.385543    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.385547    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.385551    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.385554    4673 round_trippers.go:580]     Audit-Id: 2b4b94f0-b194-4608-b7a9-f754c84b1ca7
	I0728 18:47:15.385557    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.385632    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1113","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0728 18:47:15.385808    4673 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:15.385816    4673 pod_ready.go:81] duration metric: took 397.729489ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.385842    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.584019    4673 request.go:629] Waited for 198.118502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:47:15.584187    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:47:15.584198    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.584209    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.584217    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.587046    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:15.587066    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.587077    4673 round_trippers.go:580]     Audit-Id: 75623fe3-4ec1-4c1e-aed7-e359acc02add
	I0728 18:47:15.587084    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.587091    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.587099    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.587105    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.587110    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.587276    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"974","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0728 18:47:15.784628    4673 request.go:629] Waited for 196.977316ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:15.784787    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:15.784798    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.784814    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.784823    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.787228    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:15.787240    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.787247    4673 round_trippers.go:580]     Audit-Id: f881ee1b-7ab6-4aca-9d61-24a9a01a3e6b
	I0728 18:47:15.787251    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.787258    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.787262    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.787266    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.787268    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.787418    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:15.787659    4673 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:15.787677    4673 pod_ready.go:81] duration metric: took 401.821881ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.787691    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.983559    4673 request.go:629] Waited for 195.822935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:47:15.983705    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:47:15.983717    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.983728    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.983735    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.987299    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:47:15.987313    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.987321    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:16 GMT
	I0728 18:47:15.987326    4673 round_trippers.go:580]     Audit-Id: 423ac02c-f784-439d-afd9-1211747620f0
	I0728 18:47:15.987330    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.987334    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.987338    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.987341    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.987461    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"970","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0728 18:47:16.184561    4673 request.go:629] Waited for 196.795647ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:16.184632    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:16.184641    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:16.184649    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:16.184653    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:16.186556    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:16.186566    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:16.186572    4673 round_trippers.go:580]     Audit-Id: 06ab980b-4618-4bf8-8e74-003110516b4c
	I0728 18:47:16.186580    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:16.186584    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:16.186586    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:16.186588    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:16.186591    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:16 GMT
	I0728 18:47:16.186745    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:16.186963    4673 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:16.186976    4673 pod_ready.go:81] duration metric: took 399.277206ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:16.186983    4673 pod_ready.go:38] duration metric: took 1.599967105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:47:16.186996    4673 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:47:16.187051    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:47:16.198623    4673 system_svc.go:56] duration metric: took 11.624441ms WaitForService to wait for kubelet
	I0728 18:47:16.198637    4673 kubeadm.go:582] duration metric: took 16.793260637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:47:16.198652    4673 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:47:16.383957    4673 request.go:629] Waited for 185.226784ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:47:16.384059    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:47:16.384070    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:16.384082    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:16.384102    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:16.386872    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:16.386887    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:16.386893    4673 round_trippers.go:580]     Audit-Id: 907b1c76-0ee3-463e-bfb3-31e9378c32f1
	I0728 18:47:16.386898    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:16.386902    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:16.386907    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:16.386910    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:16.386913    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:16 GMT
	I0728 18:47:16.387073    4673 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1116"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 15041 chars]
	I0728 18:47:16.387619    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:47:16.387630    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:47:16.387638    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:47:16.387643    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:47:16.387647    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:47:16.387651    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:47:16.387655    4673 node_conditions.go:105] duration metric: took 188.998592ms to run NodePressure ...
	I0728 18:47:16.387664    4673 start.go:241] waiting for startup goroutines ...
	I0728 18:47:16.387687    4673 start.go:255] writing updated cluster config ...
	I0728 18:47:16.409997    4673 out.go:177] 
	I0728 18:47:16.432381    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:47:16.432513    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:47:16.455885    4673 out.go:177] * Starting "multinode-362000-m03" worker node in "multinode-362000" cluster
	I0728 18:47:16.497765    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:47:16.497798    4673 cache.go:56] Caching tarball of preloaded images
	I0728 18:47:16.497969    4673 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:47:16.497987    4673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:47:16.498110    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:47:16.499231    4673 start.go:360] acquireMachinesLock for multinode-362000-m03: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:47:16.499326    4673 start.go:364] duration metric: took 72.314µs to acquireMachinesLock for "multinode-362000-m03"
	I0728 18:47:16.499361    4673 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:47:16.499368    4673 fix.go:54] fixHost starting: m03
	I0728 18:47:16.499775    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:47:16.499793    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:47:16.508921    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52914
	I0728 18:47:16.509260    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:47:16.509603    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:47:16.509619    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:47:16.509824    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:47:16.509940    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:16.510032    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetState
	I0728 18:47:16.510105    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:47:16.510192    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4633
	I0728 18:47:16.511099    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid 4633 missing from process table
	I0728 18:47:16.511123    4673 fix.go:112] recreateIfNeeded on multinode-362000-m03: state=Stopped err=<nil>
	I0728 18:47:16.511131    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	W0728 18:47:16.511218    4673 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:47:16.532741    4673 out.go:177] * Restarting existing hyperkit VM for "multinode-362000-m03" ...
	I0728 18:47:16.574660    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .Start
	I0728 18:47:16.574958    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:47:16.574986    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid
	I0728 18:47:16.575072    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Using UUID 5cda4f36-38f7-4c06-808b-dbe144e26e44
	I0728 18:47:16.603696    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Generated MAC 3e:8b:c4:58:a6:30
	I0728 18:47:16.603718    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:47:16.603879    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5cda4f36-38f7-4c06-808b-dbe144e26e44", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ab590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:47:16.603919    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5cda4f36-38f7-4c06-808b-dbe144e26e44", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ab590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:47:16.603977    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5cda4f36-38f7-4c06-808b-dbe144e26e44", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:47:16.604012    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5cda4f36-38f7-4c06-808b-dbe144e26e44 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:47:16.604024    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:47:16.605431    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Pid is 4703
	I0728 18:47:16.605787    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 0
	I0728 18:47:16.605799    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:47:16.605867    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4703
	I0728 18:47:16.606925    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:47:16.606996    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:47:16.607012    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84606}
	I0728 18:47:16.607042    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a845cb}
	I0728 18:47:16.607066    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
	I0728 18:47:16.607077    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Found match: 3e:8b:c4:58:a6:30
	I0728 18:47:16.607087    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | IP: 192.169.0.15
	I0728 18:47:16.607106    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetConfigRaw
	I0728 18:47:16.607808    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:47:16.607986    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:47:16.608522    4673 machine.go:94] provisionDockerMachine start ...
	I0728 18:47:16.608533    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:16.608656    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:16.608781    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:16.608912    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:16.609013    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:16.609122    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:16.609255    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:16.609415    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:16.609422    4673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:47:16.613511    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:47:16.622762    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:47:16.623774    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:47:16.623792    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:47:16.623801    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:47:16.623812    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:47:17.008257    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:47:17.008273    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:47:17.123001    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:47:17.123021    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:47:17.123037    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:47:17.123048    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:47:17.123860    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:47:17.123871    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:47:22.750018    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:47:22.750121    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:47:22.750132    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:47:22.774448    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:47:51.682863    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:47:51.682877    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:47:51.683017    4673 buildroot.go:166] provisioning hostname "multinode-362000-m03"
	I0728 18:47:51.683029    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:47:51.683125    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.683238    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:51.683325    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.683424    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.683514    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:51.683649    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:51.683794    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:51.683802    4673 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m03 && echo "multinode-362000-m03" | sudo tee /etc/hostname
	I0728 18:47:51.758129    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m03
	
	I0728 18:47:51.758143    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.758275    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:51.758370    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.758462    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.758558    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:51.758703    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:51.758849    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:51.758861    4673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:47:51.831272    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:47:51.831287    4673 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:47:51.831295    4673 buildroot.go:174] setting up certificates
	I0728 18:47:51.831313    4673 provision.go:84] configureAuth start
	I0728 18:47:51.831320    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:47:51.831485    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:47:51.831587    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.831665    4673 provision.go:143] copyHostCerts
	I0728 18:47:51.831695    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:47:51.831749    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:47:51.831755    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:47:51.831890    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:47:51.832106    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:47:51.832136    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:47:51.832140    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:47:51.832279    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:47:51.832439    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:47:51.832472    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:47:51.832477    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:47:51.832550    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:47:51.832700    4673 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m03 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-362000-m03]
	I0728 18:47:51.967383    4673 provision.go:177] copyRemoteCerts
	I0728 18:47:51.967435    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:47:51.967450    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.967730    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:51.967885    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.967980    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:51.968076    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:52.006868    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:47:52.006936    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:47:52.026208    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:47:52.026293    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:47:52.045582    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:47:52.045646    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 18:47:52.064975    4673 provision.go:87] duration metric: took 233.655646ms to configureAuth
	I0728 18:47:52.064988    4673 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:47:52.065146    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:47:52.065160    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:52.065306    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:52.065397    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:52.065469    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.065546    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.065619    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:52.065732    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:52.065859    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:52.065866    4673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:47:52.128687    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:47:52.128708    4673 buildroot.go:70] root file system type: tmpfs
	I0728 18:47:52.128791    4673 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:47:52.128801    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:52.128935    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:52.129020    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.129109    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.129197    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:52.129347    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:52.129507    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:52.129552    4673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	Environment="NO_PROXY=192.169.0.13,192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:47:52.202616    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	Environment=NO_PROXY=192.169.0.13,192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:47:52.202634    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:52.202761    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:52.202854    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.202943    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.203055    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:52.203193    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:52.203331    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:52.203343    4673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:47:53.789096    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:47:53.789113    4673 machine.go:97] duration metric: took 37.180851437s to provisionDockerMachine
	I0728 18:47:53.789121    4673 start.go:293] postStartSetup for "multinode-362000-m03" (driver="hyperkit")
	I0728 18:47:53.789135    4673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:47:53.789145    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.789333    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:47:53.789347    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:53.789452    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.789550    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.789634    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.789730    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:53.828141    4673 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:47:53.831204    4673 command_runner.go:130] > NAME=Buildroot
	I0728 18:47:53.831216    4673 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:47:53.831222    4673 command_runner.go:130] > ID=buildroot
	I0728 18:47:53.831238    4673 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:47:53.831245    4673 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:47:53.831301    4673 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:47:53.831313    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:47:53.831397    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:47:53.831578    4673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:47:53.831585    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:47:53.831741    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:47:53.838987    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:47:53.858744    4673 start.go:296] duration metric: took 69.608683ms for postStartSetup
	I0728 18:47:53.858765    4673 fix.go:56] duration metric: took 37.359667921s for fixHost
	I0728 18:47:53.858822    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:53.858947    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.859035    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.859110    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.859188    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.859302    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:53.859439    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:53.859446    4673 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:47:53.922478    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217674.062640264
	
	I0728 18:47:53.922492    4673 fix.go:216] guest clock: 1722217674.062640264
	I0728 18:47:53.922499    4673 fix.go:229] Guest: 2024-07-28 18:47:54.062640264 -0700 PDT Remote: 2024-07-28 18:47:53.858772 -0700 PDT m=+135.476717707 (delta=203.868264ms)
	I0728 18:47:53.922510    4673 fix.go:200] guest clock delta is within tolerance: 203.868264ms
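	(The delta above can be re-derived from the two timestamps the log records; a quick sketch, with both values copied verbatim and the remote wall-clock time pre-converted to a unix timestamp:)

	```shell
	# Recompute minikube's guest-clock delta from the values logged above.
	guest=1722217674.062640264   # guest `date +%s.%N`
	remote=1722217673.858772     # 2024-07-28 18:47:53.858772 -0700 PDT as epoch seconds
	awk -v g="$guest" -v r="$remote" 'BEGIN { printf "delta=%.3fms\n", (g - r) * 1000 }'
	```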
	I0728 18:47:53.922514    4673 start.go:83] releasing machines lock for "multinode-362000-m03", held for 37.423447127s
	I0728 18:47:53.922532    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.922671    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:47:53.946229    4673 out.go:177] * Found network options:
	I0728 18:47:53.965869    4673 out.go:177]   - NO_PROXY=192.169.0.13,192.169.0.14
	W0728 18:47:53.987057    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	W0728 18:47:53.987087    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:47:53.987107    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.987843    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.988040    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.988154    4673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:47:53.988193    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	W0728 18:47:53.988228    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	W0728 18:47:53.988251    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:47:53.988346    4673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:47:53.988398    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:53.988414    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.988612    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.988640    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.988801    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.988823    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.989022    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.989023    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:53.989164    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:54.023890    4673 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:47:54.024043    4673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:47:54.024097    4673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:47:54.074254    4673 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:47:54.074369    4673 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:47:54.074394    4673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:47:54.074406    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:47:54.074508    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:47:54.089984    4673 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:47:54.090233    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:47:54.098518    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:47:54.106701    4673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:47:54.106755    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:47:54.115138    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:47:54.123668    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:47:54.131990    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:47:54.140587    4673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:47:54.149140    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:47:54.157581    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:47:54.166150    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:47:54.174714    4673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:47:54.182559    4673 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:47:54.182644    4673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:47:54.190271    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:47:54.298866    4673 ssh_runner.go:195] Run: sudo systemctl restart containerd
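	(The cgroup-driver rewrite above amounts to a handful of sed edits on /etc/containerd/config.toml; a minimal self-contained sketch of the two key edits, run here against a throwaway sample config rather than the real file the log targets over SSH:)

	```shell
	# Reproduce the pause-image and cgroup-driver edits from the log on a temp copy.
	cfg=$(mktemp)
	cat > "$cfg" <<'EOF'
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.8"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = true
	EOF
	# Pin the pause image minikube expects
	sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
	# Switch containerd from the systemd cgroup driver to cgroupfs
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	cat "$cfg"
	```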
	I0728 18:47:54.318067    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:47:54.318140    4673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:47:54.338026    4673 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:47:54.338581    4673 command_runner.go:130] > [Unit]
	I0728 18:47:54.338591    4673 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:47:54.338596    4673 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:47:54.338601    4673 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:47:54.338605    4673 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:47:54.338609    4673 command_runner.go:130] > StartLimitBurst=3
	I0728 18:47:54.338612    4673 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:47:54.338620    4673 command_runner.go:130] > [Service]
	I0728 18:47:54.338625    4673 command_runner.go:130] > Type=notify
	I0728 18:47:54.338628    4673 command_runner.go:130] > Restart=on-failure
	I0728 18:47:54.338632    4673 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0728 18:47:54.338636    4673 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13,192.169.0.14
	I0728 18:47:54.338642    4673 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:47:54.338650    4673 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:47:54.338656    4673 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:47:54.338662    4673 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:47:54.338668    4673 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:47:54.338673    4673 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:47:54.338683    4673 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:47:54.338690    4673 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:47:54.338695    4673 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:47:54.338698    4673 command_runner.go:130] > ExecStart=
	I0728 18:47:54.338710    4673 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:47:54.338716    4673 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:47:54.338726    4673 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:47:54.338745    4673 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:47:54.338752    4673 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:47:54.338756    4673 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:47:54.338760    4673 command_runner.go:130] > LimitCORE=infinity
	I0728 18:47:54.338765    4673 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:47:54.338769    4673 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:47:54.338773    4673 command_runner.go:130] > TasksMax=infinity
	I0728 18:47:54.338782    4673 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:47:54.338789    4673 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:47:54.338792    4673 command_runner.go:130] > Delegate=yes
	I0728 18:47:54.338803    4673 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:47:54.338807    4673 command_runner.go:130] > KillMode=process
	I0728 18:47:54.338809    4673 command_runner.go:130] > [Install]
	I0728 18:47:54.338813    4673 command_runner.go:130] > WantedBy=multi-user.target
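	(The unit dump above relies on the standard drop-in trick its own comments describe: an empty `ExecStart=` clears the command inherited from the base unit so the single replacement that follows is accepted. A minimal illustration of that pattern, written to a temp file rather than installed; the dockerd flags here are trimmed for brevity:)

	```shell
	# Sketch of an empty-ExecStart override drop-in, as seen in the unit above.
	dropin=$(mktemp)
	cat > "$dropin" <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	cat "$dropin"
	```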
	I0728 18:47:54.338880    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:47:54.349724    4673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:47:54.369917    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:47:54.380285    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:47:54.390909    4673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:47:54.414303    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:47:54.425462    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:47:54.439971    4673 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
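	(The step above repoints crictl from containerd to cri-dockerd by rewriting /etc/crictl.yaml; the same write, sketched against a temp file so no sudo is needed, with the endpoint taken from the log:)

	```shell
	# Recreate the crictl endpoint switch from the log in a throwaway file.
	crictl_cfg=$(mktemp)
	printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" > "$crictl_cfg"
	cat "$crictl_cfg"
	```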
	I0728 18:47:54.440174    4673 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:47:54.442948    4673 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:47:54.443108    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:47:54.450126    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:47:54.463499    4673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:47:54.556646    4673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:47:54.662379    4673 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:47:54.662402    4673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:47:54.677242    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:47:54.768476    4673 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:48:55.813148    4673 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 18:48:55.813163    4673 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 18:48:55.813241    4673 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.045192266s)
	I0728 18:48:55.813322    4673 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 18:48:55.822236    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0728 18:48:55.822250    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497333097Z" level=info msg="Starting up"
	I0728 18:48:55.822264    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497791961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 18:48:55.822278    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.498335029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	I0728 18:48:55.822288    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.516158090Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 18:48:55.822298    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531116014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 18:48:55.822314    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531180338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 18:48:55.822323    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531246321Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 18:48:55.822333    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531318847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822344    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531481171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822353    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531529904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822372    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531657072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822385    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531697300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822397    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531730875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822407    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531760248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822417    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531885342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822426    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.532079562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822441    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533663897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822450    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533709153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822590    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533830614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822605    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533871544Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 18:48:55.822615    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534025855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 18:48:55.822624    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534095225Z" level=info msg="metadata content store policy set" policy=shared
	I0728 18:48:55.822633    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535457940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 18:48:55.822641    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535509819Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 18:48:55.822649    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535544130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 18:48:55.822660    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535582591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 18:48:55.822670    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535616821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 18:48:55.822679    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535678991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 18:48:55.822688    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535893163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 18:48:55.822697    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535972460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 18:48:55.822706    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536011449Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 18:48:55.822716    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536084022Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 18:48:55.822726    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536119994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822738    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536150433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822748    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536180092Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822757    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536209848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822768    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536239441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822777    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536268585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822890    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536297017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822902    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536324822Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822911    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536369752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822923    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536404061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822932    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536433648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822940    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536477196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822950    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536515276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822959    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536547653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822968    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536576577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822977    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536605955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822986    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536635251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822995    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536665832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823004    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536694177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823013    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536722442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823022    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536752762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823031    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536783569Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 18:48:55.823040    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536818503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823049    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536849022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823058    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536877256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 18:48:55.823067    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536948425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 18:48:55.823081    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536992137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 18:48:55.823091    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537090826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 18:48:55.823215    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537127999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 18:48:55.823228    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537156657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823241    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537187154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 18:48:55.823249    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537219245Z" level=info msg="NRI interface is disabled by configuration."
	I0728 18:48:55.823258    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537399754Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 18:48:55.823266    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537483452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 18:48:55.823274    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537565490Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 18:48:55.823282    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537629407Z" level=info msg="containerd successfully booted in 0.022253s"
	I0728 18:48:55.823290    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.517443604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 18:48:55.823298    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.531581234Z" level=info msg="Loading containers: start."
	I0728 18:48:55.823317    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.625199277Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 18:48:55.823327    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.689684132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 18:48:55.823336    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.891648560Z" level=info msg="Loading containers: done."
	I0728 18:48:55.823349    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906046920Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 18:48:55.823357    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906215109Z" level=info msg="Daemon has completed initialization"
	I0728 18:48:55.823366    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927454157Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 18:48:55.823380    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927719311Z" level=info msg="API listen on [::]:2376"
	I0728 18:48:55.823390    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	I0728 18:48:55.823398    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.927200063Z" level=info msg="Processing signal 'terminated'"
	I0728 18:48:55.823409    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928060039Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 18:48:55.823417    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0728 18:48:55.823425    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928240054Z" level=info msg="Daemon shutdown complete"
	I0728 18:48:55.823435    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928277964Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 18:48:55.823465    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928289772Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 18:48:55.823472    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0728 18:48:55.823478    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0728 18:48:55.823484    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0728 18:48:55.823491    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 dockerd[848]: time="2024-07-29T01:47:55.965954327Z" level=info msg="Starting up"
	I0728 18:48:55.823501    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 dockerd[848]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 18:48:55.823510    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 18:48:55.823528    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 18:48:55.823540    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 18:48:55.848031    4673 out.go:177] 
	W0728 18:48:55.868655    4673 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:47:51 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497333097Z" level=info msg="Starting up"
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497791961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.498335029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.516158090Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531116014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531180338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531246321Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531318847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531481171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531529904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531657072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531697300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531730875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531760248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531885342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.532079562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533663897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533709153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533830614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533871544Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534025855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534095225Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535457940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535509819Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535544130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535582591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535616821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535678991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535893163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535972460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536011449Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536084022Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536119994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536150433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536180092Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536209848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536239441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536268585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536297017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536324822Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536369752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536404061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536433648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536477196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536515276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536547653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536576577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536605955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536635251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536665832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536694177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536722442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536752762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536783569Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536818503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536849022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536877256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536948425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536992137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537090826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537127999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537156657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537187154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537219245Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537399754Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537483452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537565490Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537629407Z" level=info msg="containerd successfully booted in 0.022253s"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.517443604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.531581234Z" level=info msg="Loading containers: start."
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.625199277Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.689684132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.891648560Z" level=info msg="Loading containers: done."
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906046920Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906215109Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927454157Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927719311Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:47:53 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.927200063Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928060039Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:47:54 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928240054Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928277964Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928289772Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:47:55 multinode-362000-m03 dockerd[848]: time="2024-07-29T01:47:55.965954327Z" level=info msg="Starting up"
	Jul 29 01:48:55 multinode-362000-m03 dockerd[848]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:47:51 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497333097Z" level=info msg="Starting up"
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497791961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.498335029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.516158090Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531116014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531180338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531246321Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531318847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531481171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531529904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531657072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531697300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531730875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531760248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531885342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.532079562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533663897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533709153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533830614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533871544Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534025855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534095225Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535457940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535509819Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535544130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535582591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535616821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535678991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535893163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535972460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536011449Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536084022Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536119994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536150433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536180092Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536209848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536239441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536268585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536297017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536324822Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536369752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536404061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536433648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536477196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536515276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536547653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536576577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536605955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536635251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536665832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536694177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536722442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536752762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536783569Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536818503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536849022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536877256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536948425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536992137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537090826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537127999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537156657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537187154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537219245Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537399754Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537483452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537565490Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537629407Z" level=info msg="containerd successfully booted in 0.022253s"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.517443604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.531581234Z" level=info msg="Loading containers: start."
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.625199277Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.689684132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.891648560Z" level=info msg="Loading containers: done."
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906046920Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906215109Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927454157Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927719311Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:47:53 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.927200063Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928060039Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:47:54 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928240054Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928277964Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928289772Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:47:55 multinode-362000-m03 dockerd[848]: time="2024-07-29T01:47:55.965954327Z" level=info msg="Starting up"
	Jul 29 01:48:55 multinode-362000-m03 dockerd[848]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 18:48:55.868782    4673 out.go:239] * 
	W0728 18:48:55.870089    4673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:48:55.931441    4673 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-362000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-362000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 logs -n 25: (2.852693233s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p mount-start-2-934000                           | mount-start-2-934000 | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:39 PDT |
	| delete  | -p mount-start-1-925000                           | mount-start-1-925000 | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:39 PDT |
	| start   | -p multinode-362000                               | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:39 PDT | 28 Jul 24 18:41 PDT |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- apply -f                   | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- rollout                    | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- get pods -o                | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-8hq8g -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-362000 -- exec                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT | 28 Jul 24 18:41 PDT |
	|         | busybox-fc5497c4f-svnlx -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                      |         |         |                     |                     |
	| node    | add -p multinode-362000 -v 3                      | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:41 PDT |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	| node    | multinode-362000 node stop m03                    | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:42 PDT | 28 Jul 24 18:42 PDT |
	| node    | multinode-362000 node start                       | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:42 PDT | 28 Jul 24 18:45 PDT |
	|         | m03 -v=7 --alsologtostderr                        |                      |         |         |                     |                     |
	| node    | list -p multinode-362000                          | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:45 PDT |                     |
	| stop    | -p multinode-362000                               | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:45 PDT | 28 Jul 24 18:45 PDT |
	| start   | -p multinode-362000                               | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:45 PDT |                     |
	|         | --wait=true -v=8                                  |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	| node    | list -p multinode-362000                          | multinode-362000     | jenkins | v1.33.1 | 28 Jul 24 18:48 PDT |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 18:45:38
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 18:45:38.417840    4673 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:38.418019    4673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:38.418024    4673 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:38.418028    4673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:38.418193    4673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:45:38.419696    4673 out.go:298] Setting JSON to false
	I0728 18:45:38.442261    4673 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4509,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:45:38.442355    4673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:38.464048    4673 out.go:177] * [multinode-362000] minikube v1.33.1 on Darwin 14.5
	I0728 18:45:38.505773    4673 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:38.505813    4673 notify.go:220] Checking for updates...
	I0728 18:45:38.548494    4673 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:45:38.569795    4673 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:45:38.592752    4673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:38.613666    4673 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:45:38.634551    4673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:38.656368    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:38.656509    4673 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:38.656991    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:45:38.657052    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:45:38.666154    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52847
	I0728 18:45:38.666504    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:45:38.666920    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:45:38.666929    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:45:38.667143    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:45:38.667270    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:38.695663    4673 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 18:45:38.737553    4673 start.go:297] selected driver: hyperkit
	I0728 18:45:38.737606    4673 start.go:901] validating driver "hyperkit" against &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:38.737810    4673 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:38.737978    4673 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:38.738185    4673 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:45:38.747689    4673 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:45:38.751451    4673 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:45:38.751476    4673 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:45:38.754139    4673 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:38.754175    4673 cni.go:84] Creating CNI manager for ""
	I0728 18:45:38.754182    4673 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:45:38.754259    4673 start.go:340] cluster config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:38.754352    4673 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:38.796638    4673 out.go:177] * Starting "multinode-362000" primary control-plane node in "multinode-362000" cluster
	I0728 18:45:38.817741    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:38.817811    4673 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 18:45:38.817836    4673 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:38.818023    4673 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:45:38.818042    4673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:45:38.818228    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:45:38.819184    4673 start.go:360] acquireMachinesLock for multinode-362000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:38.819306    4673 start.go:364] duration metric: took 97.069µs to acquireMachinesLock for "multinode-362000"
	I0728 18:45:38.819343    4673 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:45:38.819363    4673 fix.go:54] fixHost starting: 
	I0728 18:45:38.819803    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:45:38.819830    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:45:38.828721    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52849
	I0728 18:45:38.829083    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:45:38.829443    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:45:38.829454    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:45:38.829748    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:45:38.829914    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:38.830027    4673 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:45:38.830122    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:45:38.830223    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:45:38.831091    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid 4468 missing from process table
	I0728 18:45:38.831121    4673 fix.go:112] recreateIfNeeded on multinode-362000: state=Stopped err=<nil>
	I0728 18:45:38.831135    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	W0728 18:45:38.831223    4673 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:45:38.872402    4673 out.go:177] * Restarting existing hyperkit VM for "multinode-362000" ...
	I0728 18:45:38.893469    4673 main.go:141] libmachine: (multinode-362000) Calling .Start
	I0728 18:45:38.893764    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:45:38.893802    4673 main.go:141] libmachine: (multinode-362000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid
	I0728 18:45:38.895528    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid 4468 missing from process table
	I0728 18:45:38.895559    4673 main.go:141] libmachine: (multinode-362000) DBG | pid 4468 is in state "Stopped"
	I0728 18:45:38.895596    4673 main.go:141] libmachine: (multinode-362000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid...
	I0728 18:45:38.896013    4673 main.go:141] libmachine: (multinode-362000) DBG | Using UUID 8122a2e4-0139-4f45-b808-288a2b40595b
	I0728 18:45:39.005368    4673 main.go:141] libmachine: (multinode-362000) DBG | Generated MAC e:8c:86:9:55:cf
	I0728 18:45:39.005393    4673 main.go:141] libmachine: (multinode-362000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:45:39.005522    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ae4e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:45:39.005558    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8122a2e4-0139-4f45-b808-288a2b40595b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ae4e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:45:39.005591    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8122a2e4-0139-4f45-b808-288a2b40595b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:45:39.005622    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8122a2e4-0139-4f45-b808-288a2b40595b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/multinode-362000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:45:39.005634    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:45:39.007125    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 DEBUG: hyperkit: Pid is 4686
	I0728 18:45:39.007618    4673 main.go:141] libmachine: (multinode-362000) DBG | Attempt 0
	I0728 18:45:39.007633    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:45:39.007728    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:45:39.009765    4673 main.go:141] libmachine: (multinode-362000) DBG | Searching for e:8c:86:9:55:cf in /var/db/dhcpd_leases ...
	I0728 18:45:39.009810    4673 main.go:141] libmachine: (multinode-362000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:45:39.009837    4673 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
	I0728 18:45:39.009858    4673 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:45:39.009873    4673 main.go:141] libmachine: (multinode-362000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
	I0728 18:45:39.009887    4673 main.go:141] libmachine: (multinode-362000) DBG | Found match: e:8c:86:9:55:cf
	I0728 18:45:39.009900    4673 main.go:141] libmachine: (multinode-362000) DBG | IP: 192.169.0.13
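	[editor note] The lease lookup logged above scans /var/db/dhcpd_leases for the VM's generated MAC and reads back the IP it was assigned. A minimal local sketch of that matching, with an inline sample standing in for the real lease file (which is root-only on the host); the `mac` value and `leases` filename are illustrative:

```shell
# Sketch of the DHCP-lease lookup: find the record whose HWAddress matches
# a given MAC and print its IPAddress field. Sample data mirrors the log.
cat > leases <<'EOF'
{Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
{Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
{Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a84455}
EOF
awk -v mac='e:8c:86:9:55:cf' '
  index($0, "HWAddress:" mac " ") {   # exact match on the HWAddress field
    for (i = 1; i <= NF; i++)
      if (index($i, "IPAddress:") == 1) { sub(/IPAddress:/, "", $i); print $i }
  }' leases
```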
	I0728 18:45:39.009946    4673 main.go:141] libmachine: (multinode-362000) Calling .GetConfigRaw
	I0728 18:45:39.010720    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:39.010962    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:45:39.011561    4673 machine.go:94] provisionDockerMachine start ...
	I0728 18:45:39.011574    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:39.011710    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:39.011855    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:39.011973    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:39.012065    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:39.012173    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:39.012309    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:39.012528    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:39.012539    4673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:45:39.015353    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:45:39.067526    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:45:39.068273    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:45:39.068290    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:45:39.068303    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:45:39.068309    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:45:39.451282    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:45:39.451295    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:45:39.565812    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:45:39.565831    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:45:39.565861    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:45:39.565873    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:45:39.566705    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:45:39.566719    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:45:45.138571    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:45:45.138644    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:45:45.138656    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:45:45.162462    4673 main.go:141] libmachine: (multinode-362000) DBG | 2024/07/28 18:45:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:45:50.070800    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:45:50.070814    4673 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:45:50.070951    4673 buildroot.go:166] provisioning hostname "multinode-362000"
	I0728 18:45:50.070963    4673 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:45:50.071066    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.071167    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.071260    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.071343    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.071434    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.071571    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.071712    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.071726    4673 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000 && echo "multinode-362000" | sudo tee /etc/hostname
	I0728 18:45:50.134854    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000
	
	I0728 18:45:50.134872    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.134997    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.135116    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.135200    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.135297    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.135423    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.135563    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.135574    4673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000' | sudo tee -a /etc/hosts; 
				fi
			fi
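	[editor note] The SSH snippet above is an idempotent "point 127.0.1.1 at the node name" edit: do nothing if the name is already present, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a fresh line. A runnable local sketch (GNU grep/sed assumed, as on the buildroot guest; `demo_hosts` stands in for /etc/hosts, which the real command edits via sudo):

```shell
# Idempotent hosts-file update, mirroring the provisioner's logic.
HOSTS=demo_hosts
NAME=multinode-362000
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
    # an old 127.0.1.1 entry exists: rewrite it in place
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no 127.0.1.1 entry yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```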
	I0728 18:45:50.196846    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:45:50.196869    4673 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:45:50.196892    4673 buildroot.go:174] setting up certificates
	I0728 18:45:50.196906    4673 provision.go:84] configureAuth start
	I0728 18:45:50.196914    4673 main.go:141] libmachine: (multinode-362000) Calling .GetMachineName
	I0728 18:45:50.197054    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:50.197156    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.197243    4673 provision.go:143] copyHostCerts
	I0728 18:45:50.197277    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:45:50.197358    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:45:50.197367    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:45:50.197515    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:45:50.197722    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:45:50.197765    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:45:50.197769    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:45:50.197852    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:45:50.198031    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:45:50.198074    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:45:50.198079    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:45:50.198172    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:45:50.198353    4673 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-362000]
	I0728 18:45:50.322970    4673 provision.go:177] copyRemoteCerts
	I0728 18:45:50.323026    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:45:50.323055    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.323169    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.323269    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.323356    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.323453    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:50.356787    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:45:50.356852    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:45:50.375891    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:45:50.375948    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0728 18:45:50.394763    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:45:50.394825    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:45:50.414207    4673 provision.go:87] duration metric: took 217.291265ms to configureAuth
	I0728 18:45:50.414219    4673 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:45:50.414383    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:50.414397    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:50.414539    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.414635    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.414726    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.414802    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.414885    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.414986    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.415110    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.415118    4673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:45:50.467473    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:45:50.467486    4673 buildroot.go:70] root file system type: tmpfs
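	[editor note] The probe above uses GNU df's `--output=fstype` column selector, with `tail` stripping the "Type" header row; the guest answers `tmpfs` because buildroot unpacks its rootfs into memory. On any Linux host the same pipeline runs as-is (the reported fstype will vary):

```shell
# Print only the filesystem type of / (GNU coreutils df required;
# tail -n 1 drops the column-header line).
df --output=fstype / | tail -n 1
```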
	I0728 18:45:50.467551    4673 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:45:50.467567    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.467707    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.467803    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.467913    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.468006    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.468136    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.468282    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.468326    4673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:45:50.530974    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:45:50.531001    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:50.531127    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:50.531214    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.531298    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:50.531411    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:50.531541    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:50.531694    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:50.531706    4673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
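	[editor note] The one-liner above is a write-then-swap install: the regenerated unit goes to docker.service.new, and only when `diff` reports a difference (or the old file is missing, as in this run) is it moved into place and the daemon reloaded and restarted, so an unchanged config never bounces Docker. A local sketch with plain files standing in for the systemd paths (no sudo/systemctl; the `.demo` names are illustrative):

```shell
# Write-then-swap: install the new unit only when it differs from the
# current one; identical content leaves the running service untouched.
svc=docker.service.demo
printf '[Unit]\nDescription=old unit\n' > "$svc"
printf '[Unit]\nDescription=new unit\n' > "$svc.new"

if diff -u "$svc" "$svc.new" > /dev/null 2>&1; then
  rm "$svc.new"            # identical: discard the staged copy
  echo "unchanged"
else
  mv "$svc.new" "$svc"     # changed: swap in (the VM then restarts docker)
  echo "updated"
fi
```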
	I0728 18:45:52.175000    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:45:52.175015    4673 machine.go:97] duration metric: took 13.163539557s to provisionDockerMachine
	I0728 18:45:52.175026    4673 start.go:293] postStartSetup for "multinode-362000" (driver="hyperkit")
	I0728 18:45:52.175033    4673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:45:52.175047    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.175252    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:45:52.175266    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.175354    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.175448    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.175556    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.175637    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:52.213189    4673 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:45:52.217247    4673 command_runner.go:130] > NAME=Buildroot
	I0728 18:45:52.217257    4673 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:45:52.217261    4673 command_runner.go:130] > ID=buildroot
	I0728 18:45:52.217265    4673 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:45:52.217277    4673 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:45:52.217385    4673 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:45:52.217398    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:45:52.217506    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:45:52.217702    4673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:45:52.217709    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:45:52.217927    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:45:52.228721    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:45:52.261274    4673 start.go:296] duration metric: took 86.240044ms for postStartSetup
	I0728 18:45:52.261300    4673 fix.go:56] duration metric: took 13.442043564s for fixHost
	I0728 18:45:52.261313    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.261436    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.261529    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.261617    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.261699    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.261853    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:45:52.261989    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0728 18:45:52.261996    4673 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:45:52.314183    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217552.447141799
	
	I0728 18:45:52.314194    4673 fix.go:216] guest clock: 1722217552.447141799
	I0728 18:45:52.314199    4673 fix.go:229] Guest: 2024-07-28 18:45:52.447141799 -0700 PDT Remote: 2024-07-28 18:45:52.261303 -0700 PDT m=+13.878368752 (delta=185.838799ms)
	I0728 18:45:52.314216    4673 fix.go:200] guest clock delta is within tolerance: 185.838799ms
	I0728 18:45:52.314219    4673 start.go:83] releasing machines lock for "multinode-362000", held for 13.495000417s
	I0728 18:45:52.314238    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.314391    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:52.314503    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.314872    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.314986    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:45:52.315084    4673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:45:52.315119    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.315146    4673 ssh_runner.go:195] Run: cat /version.json
	I0728 18:45:52.315159    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:45:52.315212    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.315241    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:45:52.315346    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.315362    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:45:52.315425    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.315449    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:45:52.315513    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:52.315535    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:45:52.402603    4673 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:45:52.403517    4673 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0728 18:45:52.403709    4673 ssh_runner.go:195] Run: systemctl --version
	I0728 18:45:52.408812    4673 command_runner.go:130] > systemd 252 (252)
	I0728 18:45:52.408834    4673 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0728 18:45:52.409070    4673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:45:52.413189    4673 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:45:52.413232    4673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:45:52.413280    4673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:45:52.426525    4673 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:45:52.426622    4673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:45:52.426631    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:45:52.426735    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:45:52.441487    4673 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:45:52.441777    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:45:52.450602    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:45:52.459645    4673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:45:52.459689    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:45:52.468580    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:45:52.477277    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:45:52.486024    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:45:52.494784    4673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:45:52.503698    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:45:52.512471    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:45:52.521118    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:45:52.529925    4673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:45:52.537899    4673 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:45:52.538051    4673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:45:52.546207    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:52.648661    4673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:45:52.668071    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:45:52.668148    4673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:45:52.681866    4673 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:45:52.682026    4673 command_runner.go:130] > [Unit]
	I0728 18:45:52.682036    4673 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:45:52.682044    4673 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:45:52.682050    4673 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:45:52.682054    4673 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:45:52.682058    4673 command_runner.go:130] > StartLimitBurst=3
	I0728 18:45:52.682062    4673 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:45:52.682066    4673 command_runner.go:130] > [Service]
	I0728 18:45:52.682069    4673 command_runner.go:130] > Type=notify
	I0728 18:45:52.682072    4673 command_runner.go:130] > Restart=on-failure
	I0728 18:45:52.682079    4673 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:45:52.682087    4673 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:45:52.682093    4673 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:45:52.682099    4673 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:45:52.682105    4673 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:45:52.682114    4673 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:45:52.682121    4673 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:45:52.682130    4673 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:45:52.682137    4673 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:45:52.682141    4673 command_runner.go:130] > ExecStart=
	I0728 18:45:52.682153    4673 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:45:52.682156    4673 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:45:52.682162    4673 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:45:52.682167    4673 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:45:52.682172    4673 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:45:52.682175    4673 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:45:52.682179    4673 command_runner.go:130] > LimitCORE=infinity
	I0728 18:45:52.682185    4673 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:45:52.682190    4673 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:45:52.682193    4673 command_runner.go:130] > TasksMax=infinity
	I0728 18:45:52.682197    4673 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:45:52.682202    4673 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:45:52.682205    4673 command_runner.go:130] > Delegate=yes
	I0728 18:45:52.682210    4673 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:45:52.682214    4673 command_runner.go:130] > KillMode=process
	I0728 18:45:52.682218    4673 command_runner.go:130] > [Install]
	I0728 18:45:52.682230    4673 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:45:52.682352    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:45:52.694437    4673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:45:52.714095    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:45:52.724786    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:45:52.734755    4673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:45:52.757057    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:45:52.767836    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:45:52.783282    4673 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:45:52.783636    4673 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:45:52.786451    4673 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:45:52.786625    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:45:52.793644    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:45:52.807004    4673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:45:52.902471    4673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:45:52.993894    4673 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:45:52.993959    4673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:45:53.008812    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:53.107610    4673 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:45:55.429561    4673 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321948073s)
	I0728 18:45:55.429625    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:45:55.441155    4673 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:45:55.453910    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:45:55.464413    4673 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:45:55.559169    4673 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:45:55.663530    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:55.779347    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:45:55.792910    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:45:55.803704    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:55.899175    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:45:55.958796    4673 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:45:55.958854    4673 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:45:55.962856    4673 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:45:55.962869    4673 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:45:55.962874    4673 command_runner.go:130] > Device: 0,22	Inode: 747         Links: 1
	I0728 18:45:55.962888    4673 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:45:55.962896    4673 command_runner.go:130] > Access: 2024-07-29 01:45:56.043761094 +0000
	I0728 18:45:55.962904    4673 command_runner.go:130] > Modify: 2024-07-29 01:45:56.043761094 +0000
	I0728 18:45:55.962911    4673 command_runner.go:130] > Change: 2024-07-29 01:45:56.045760874 +0000
	I0728 18:45:55.962930    4673 command_runner.go:130] >  Birth: -
	I0728 18:45:55.962992    4673 start.go:563] Will wait 60s for crictl version
	I0728 18:45:55.963033    4673 ssh_runner.go:195] Run: which crictl
	I0728 18:45:55.965939    4673 command_runner.go:130] > /usr/bin/crictl
	I0728 18:45:55.966156    4673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:45:55.993479    4673 command_runner.go:130] > Version:  0.1.0
	I0728 18:45:55.993491    4673 command_runner.go:130] > RuntimeName:  docker
	I0728 18:45:55.993495    4673 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:45:55.993499    4673 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:45:55.994588    4673 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:45:55.994652    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:45:56.012242    4673 command_runner.go:130] > 27.1.0
	I0728 18:45:56.012372    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:45:56.030310    4673 command_runner.go:130] > 27.1.0
	I0728 18:45:56.071630    4673 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:45:56.071677    4673 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:45:56.072056    4673 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:45:56.076440    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:45:56.085792    4673 kubeadm.go:883] updating cluster {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0728 18:45:56.085876    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:56.085938    4673 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:45:56.099093    4673 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0728 18:45:56.099107    4673 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0728 18:45:56.099112    4673 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0728 18:45:56.099116    4673 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0728 18:45:56.099119    4673 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0728 18:45:56.099140    4673 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0728 18:45:56.099160    4673 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:45:56.099165    4673 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0728 18:45:56.099169    4673 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:45:56.099173    4673 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 18:45:56.099743    4673 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 18:45:56.099751    4673 docker.go:615] Images already preloaded, skipping extraction
	I0728 18:45:56.099827    4673 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:45:56.111107    4673 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0728 18:45:56.111120    4673 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0728 18:45:56.111124    4673 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0728 18:45:56.111132    4673 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0728 18:45:56.111136    4673 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0728 18:45:56.111143    4673 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0728 18:45:56.111147    4673 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:45:56.111151    4673 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0728 18:45:56.111155    4673 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:45:56.111159    4673 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 18:45:56.111676    4673 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 18:45:56.111699    4673 cache_images.go:84] Images are preloaded, skipping loading
	I0728 18:45:56.111712    4673 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0728 18:45:56.111800    4673 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:45:56.111865    4673 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:45:56.146274    4673 command_runner.go:130] > cgroupfs
	I0728 18:45:56.146885    4673 cni.go:84] Creating CNI manager for ""
	I0728 18:45:56.146895    4673 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:45:56.146906    4673 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:45:56.146922    4673 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-362000 NodeName:multinode-362000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:45:56.147002    4673 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-362000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:45:56.147062    4673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:45:56.154499    4673 command_runner.go:130] > kubeadm
	I0728 18:45:56.154508    4673 command_runner.go:130] > kubectl
	I0728 18:45:56.154512    4673 command_runner.go:130] > kubelet
	I0728 18:45:56.154526    4673 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:45:56.154570    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:45:56.161753    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0728 18:45:56.175166    4673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:45:56.188501    4673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0728 18:45:56.201915    4673 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:45:56.204741    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:45:56.213831    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:45:56.314251    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:45:56.327877    4673 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.13
	I0728 18:45:56.327888    4673 certs.go:194] generating shared ca certs ...
	I0728 18:45:56.327898    4673 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:56.328070    4673 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:45:56.328149    4673 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:45:56.328160    4673 certs.go:256] generating profile certs ...
	I0728 18:45:56.328253    4673 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key
	I0728 18:45:56.328332    4673 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key.cf2f2b57
	I0728 18:45:56.328411    4673 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key
	I0728 18:45:56.328419    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:45:56.328440    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:45:56.328458    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:45:56.328476    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:45:56.328493    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 18:45:56.328522    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 18:45:56.328552    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 18:45:56.328574    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 18:45:56.328677    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:45:56.328726    4673 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:45:56.328735    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:45:56.328769    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:45:56.328817    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:45:56.328854    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:45:56.328933    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:45:56.328968    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.328989    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.329006    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.329433    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:45:56.360113    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:45:56.384683    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:45:56.414348    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:45:56.438537    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0728 18:45:56.458006    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:45:56.477011    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:45:56.496093    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:45:56.515234    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:45:56.534555    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:45:56.553842    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:45:56.573050    4673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 18:45:56.586612    4673 ssh_runner.go:195] Run: openssl version
	I0728 18:45:56.590610    4673 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:45:56.590830    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:45:56.599807    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.602970    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.603135    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.603178    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:45:56.607074    4673 command_runner.go:130] > b5213941
	I0728 18:45:56.607310    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:45:56.616281    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:45:56.625173    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.628303    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.628476    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.628509    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:45:56.632415    4673 command_runner.go:130] > 51391683
	I0728 18:45:56.632627    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:45:56.641669    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:45:56.650722    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.653803    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.653989    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.654026    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:45:56.657897    4673 command_runner.go:130] > 3ec20f2e
	I0728 18:45:56.658048    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:45:56.666799    4673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:45:56.669910    4673 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:45:56.669920    4673 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0728 18:45:56.669925    4673 command_runner.go:130] > Device: 253,1	Inode: 531528      Links: 1
	I0728 18:45:56.669936    4673 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 18:45:56.669941    4673 command_runner.go:130] > Access: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.669946    4673 command_runner.go:130] > Modify: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.669950    4673 command_runner.go:130] > Change: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.669955    4673 command_runner.go:130] >  Birth: 2024-07-29 01:39:47.972565447 +0000
	I0728 18:45:56.670100    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0728 18:45:56.674117    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.674335    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0728 18:45:56.678337    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.678524    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0728 18:45:56.682543    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.682745    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0728 18:45:56.686691    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.686874    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0728 18:45:56.690811    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.690989    4673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0728 18:45:56.694929    4673 command_runner.go:130] > Certificate will not expire
	I0728 18:45:56.695116    4673 kubeadm.go:392] StartCluster: {Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:56.695246    4673 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:45:56.707569    4673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:45:56.715778    4673 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0728 18:45:56.715788    4673 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0728 18:45:56.715792    4673 command_runner.go:130] > /var/lib/minikube/etcd:
	I0728 18:45:56.715795    4673 command_runner.go:130] > member
	I0728 18:45:56.715913    4673 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0728 18:45:56.715924    4673 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0728 18:45:56.715960    4673 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 18:45:56.724101    4673 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:45:56.724439    4673 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-362000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:45:56.724526    4673 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1006/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-362000" cluster setting kubeconfig missing "multinode-362000" context setting]
	I0728 18:45:56.724729    4673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/kubeconfig: {Name:mk76ac5b4283108fca1a66cc5cd0791fbea0691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:56.725352    4673 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:45:56.725564    4673 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10bd5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:45:56.725891    4673 cert_rotation.go:137] Starting client certificate rotation controller
	I0728 18:45:56.726067    4673 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 18:45:56.733884    4673 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
	I0728 18:45:56.733899    4673 kubeadm.go:1160] stopping kube-system containers ...
	I0728 18:45:56.733958    4673 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:45:56.747508    4673 command_runner.go:130] > 4e01b33bc28c
	I0728 18:45:56.747520    4673 command_runner.go:130] > 1255904b9cda
	I0728 18:45:56.747524    4673 command_runner.go:130] > 28cbce0c6ed9
	I0728 18:45:56.747527    4673 command_runner.go:130] > de282e66d4c0
	I0728 18:45:56.747530    4673 command_runner.go:130] > a44317c7df72
	I0728 18:45:56.747543    4673 command_runner.go:130] > 473044afd6a2
	I0728 18:45:56.747547    4673 command_runner.go:130] > 3050e483a8a8
	I0728 18:45:56.747550    4673 command_runner.go:130] > a8dcd682eb59
	I0728 18:45:56.747553    4673 command_runner.go:130] > 898c4f8b2269
	I0728 18:45:56.747559    4673 command_runner.go:130] > f4075b746de3
	I0728 18:45:56.747564    4673 command_runner.go:130] > ef990ab76809
	I0728 18:45:56.747568    4673 command_runner.go:130] > e54a6e4f589e
	I0728 18:45:56.747571    4673 command_runner.go:130] > c5e0cac22c05
	I0728 18:45:56.747575    4673 command_runner.go:130] > 9bd37faa2f0a
	I0728 18:45:56.747578    4673 command_runner.go:130] > 1e7d4787a9c3
	I0728 18:45:56.747581    4673 command_runner.go:130] > 9ebd1495f389
	I0728 18:45:56.748134    4673 docker.go:483] Stopping containers: [4e01b33bc28c 1255904b9cda 28cbce0c6ed9 de282e66d4c0 a44317c7df72 473044afd6a2 3050e483a8a8 a8dcd682eb59 898c4f8b2269 f4075b746de3 ef990ab76809 e54a6e4f589e c5e0cac22c05 9bd37faa2f0a 1e7d4787a9c3 9ebd1495f389]
	I0728 18:45:56.748209    4673 ssh_runner.go:195] Run: docker stop 4e01b33bc28c 1255904b9cda 28cbce0c6ed9 de282e66d4c0 a44317c7df72 473044afd6a2 3050e483a8a8 a8dcd682eb59 898c4f8b2269 f4075b746de3 ef990ab76809 e54a6e4f589e c5e0cac22c05 9bd37faa2f0a 1e7d4787a9c3 9ebd1495f389
	I0728 18:45:56.760719    4673 command_runner.go:130] > 4e01b33bc28c
	I0728 18:45:56.760732    4673 command_runner.go:130] > 1255904b9cda
	I0728 18:45:56.760735    4673 command_runner.go:130] > 28cbce0c6ed9
	I0728 18:45:56.760947    4673 command_runner.go:130] > de282e66d4c0
	I0728 18:45:56.763002    4673 command_runner.go:130] > a44317c7df72
	I0728 18:45:56.764177    4673 command_runner.go:130] > 473044afd6a2
	I0728 18:45:56.764193    4673 command_runner.go:130] > 3050e483a8a8
	I0728 18:45:56.764198    4673 command_runner.go:130] > a8dcd682eb59
	I0728 18:45:56.764201    4673 command_runner.go:130] > 898c4f8b2269
	I0728 18:45:56.764205    4673 command_runner.go:130] > f4075b746de3
	I0728 18:45:56.764208    4673 command_runner.go:130] > ef990ab76809
	I0728 18:45:56.764211    4673 command_runner.go:130] > e54a6e4f589e
	I0728 18:45:56.764215    4673 command_runner.go:130] > c5e0cac22c05
	I0728 18:45:56.764218    4673 command_runner.go:130] > 9bd37faa2f0a
	I0728 18:45:56.764222    4673 command_runner.go:130] > 1e7d4787a9c3
	I0728 18:45:56.764225    4673 command_runner.go:130] > 9ebd1495f389
	I0728 18:45:56.765046    4673 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 18:45:56.777782    4673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:45:56.785743    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0728 18:45:56.785754    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0728 18:45:56.785760    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0728 18:45:56.785765    4673 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:45:56.785958    4673 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:45:56.785966    4673 kubeadm.go:157] found existing configuration files:
	
	I0728 18:45:56.786004    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 18:45:56.793624    4673 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:45:56.793639    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:45:56.793681    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:45:56.801434    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 18:45:56.808929    4673 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:45:56.808944    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:45:56.808980    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:45:56.816960    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 18:45:56.824507    4673 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:45:56.824525    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:45:56.824561    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:45:56.832448    4673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 18:45:56.840091    4673 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:45:56.840107    4673 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:45:56.840137    4673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 18:45:56.847993    4673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:45:56.855855    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:56.931374    4673 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:45:56.931387    4673 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0728 18:45:56.931392    4673 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0728 18:45:56.931397    4673 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 18:45:56.931404    4673 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0728 18:45:56.931410    4673 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0728 18:45:56.931415    4673 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0728 18:45:56.931421    4673 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0728 18:45:56.931426    4673 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0728 18:45:56.931432    4673 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 18:45:56.931437    4673 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 18:45:56.931441    4673 command_runner.go:130] > [certs] Using the existing "sa" key
	I0728 18:45:56.931458    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:56.972637    4673 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:45:57.092111    4673 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:45:57.430834    4673 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0728 18:45:57.545975    4673 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:45:57.694596    4673 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:45:57.837182    4673 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:45:57.839024    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:57.887965    4673 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:45:57.887980    4673 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:45:57.887985    4673 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:45:58.004235    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:58.063887    4673 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:45:58.063905    4673 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:45:58.066931    4673 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:45:58.070813    4673 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:45:58.072137    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:45:58.132428    4673 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:45:58.140407    4673 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:45:58.140471    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:45:58.641196    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:45:59.140593    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:45:59.153956    4673 command_runner.go:130] > 1742
	I0728 18:45:59.153999    4673 api_server.go:72] duration metric: took 1.013610274s to wait for apiserver process to appear ...
	I0728 18:45:59.154007    4673 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:45:59.154023    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:01.283789    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 18:46:01.283808    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 18:46:01.283816    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:01.329010    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 18:46:01.329031    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 18:46:01.655183    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:01.660000    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 18:46:01.660019    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 18:46:02.154174    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:02.157536    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 18:46:02.157553    4673 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 18:46:02.655261    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:02.659989    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
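The verbose `/healthz` responses above list each check with a `[+]` (passing) or `[-]` (failing) prefix; the 500s persist until the last `[-]` poststarthook clears. A minimal illustrative parser (not part of the test harness; the sample input is a shortened excerpt of the format above) that extracts the failing check names:

```python
# Parse kube-apiserver verbose /healthz output and list the failing checks.
# Lines look like "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
def failing_checks(healthz_text: str) -> list[str]:
    failed = []
    for line in healthz_text.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # Strip the "[-]" prefix and the " failed: ..." suffix.
            failed.append(line[3:].split(" failed", 1)[0])
    return failed

sample = """[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
healthz check failed"""
print(failing_checks(sample))
# ['poststarthook/rbac/bootstrap-roles', 'poststarthook/bootstrap-controller']
```

This matches the progression in the log: early responses fail many poststarthooks, later ones only `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes`, and the final response returns 200 with a bare `ok`.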
	I0728 18:46:02.660053    4673 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0728 18:46:02.660059    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:02.660066    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:02.660070    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:02.668512    4673 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0728 18:46:02.668524    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:02.668530    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:02.668533    4673 round_trippers.go:580]     Content-Length: 263
	I0728 18:46:02.668535    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:02 GMT
	I0728 18:46:02.668537    4673 round_trippers.go:580]     Audit-Id: 8f70f441-9df6-47ba-a3cc-867901aa7c72
	I0728 18:46:02.668539    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:02.668542    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:02.668549    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:02.668588    4673 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 18:46:02.668657    4673 api_server.go:141] control plane version: v1.30.3
	I0728 18:46:02.668669    4673 api_server.go:131] duration metric: took 3.514682856s to wait for apiserver health ...
	I0728 18:46:02.668676    4673 cni.go:84] Creating CNI manager for ""
	I0728 18:46:02.668680    4673 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0728 18:46:02.690995    4673 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 18:46:02.711028    4673 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 18:46:02.717331    4673 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0728 18:46:02.717346    4673 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0728 18:46:02.717351    4673 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0728 18:46:02.717356    4673 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 18:46:02.717365    4673 command_runner.go:130] > Access: 2024-07-29 01:45:49.171141326 +0000
	I0728 18:46:02.717370    4673 command_runner.go:130] > Modify: 2024-07-23 05:15:32.000000000 +0000
	I0728 18:46:02.717374    4673 command_runner.go:130] > Change: 2024-07-29 01:45:46.978185440 +0000
	I0728 18:46:02.717378    4673 command_runner.go:130] >  Birth: -
	I0728 18:46:02.717629    4673 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0728 18:46:02.717637    4673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0728 18:46:02.735872    4673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 18:46:03.116876    4673 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0728 18:46:03.136590    4673 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0728 18:46:03.205885    4673 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0728 18:46:03.245561    4673 command_runner.go:130] > daemonset.apps/kindnet configured
	I0728 18:46:03.246956    4673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 18:46:03.247010    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:03.247017    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.247025    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.247029    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.249490    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.249499    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.249504    4673 round_trippers.go:580]     Audit-Id: dd42f93b-27cd-4a41-b3a1-a670734a78af
	I0728 18:46:03.249508    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.249511    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.249514    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.249517    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.249519    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.250510    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87605 chars]
	I0728 18:46:03.254552    4673 system_pods.go:59] 12 kube-system pods found
	I0728 18:46:03.254573    4673 system_pods.go:61] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 18:46:03.254579    4673 system_pods.go:61] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 18:46:03.254585    4673 system_pods.go:61] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0728 18:46:03.254588    4673 system_pods.go:61] "kindnet-5dhhf" [e124802a-dbb6-4100-8c49-8a75ea05217a] Running
	I0728 18:46:03.254591    4673 system_pods.go:61] "kindnet-8hhwv" [487e32b7-7175-4187-89ba-90bb4d597681] Running
	I0728 18:46:03.254595    4673 system_pods.go:61] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0728 18:46:03.254600    4673 system_pods.go:61] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 18:46:03.254603    4673 system_pods.go:61] "kube-proxy-7gm24" [9db42267-b01f-40a3-bf21-c4d8cf6fb372] Running
	I0728 18:46:03.254606    4673 system_pods.go:61] "kube-proxy-dzz6p" [577d6ba2-e17a-426f-8315-1688766fa435] Running
	I0728 18:46:03.254610    4673 system_pods.go:61] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0728 18:46:03.254614    4673 system_pods.go:61] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 18:46:03.254618    4673 system_pods.go:61] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 18:46:03.254623    4673 system_pods.go:74] duration metric: took 7.66063ms to wait for pod list to return data ...
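The `system_pods` summaries above follow a fixed shape, e.g. `Running / Ready:ContainersNotReady (containers with unready status: [coredns])`. A small illustrative parser (the format is inferred from the log lines above, not taken from minikube source) for pulling out the unready container names:

```python
import re

# Extract unready container names from a system_pods summary line, e.g.
# 'Running / Ready:ContainersNotReady (containers with unready status: [coredns])'.
def unready_containers(status: str) -> list[str]:
    m = re.search(r"unready status: \[([^\]]*)\]", status)
    return m.group(1).split() if m else []

line = ('"coredns-7db6d8ff4d-8npcw" Running / Ready:ContainersNotReady '
        '(containers with unready status: [coredns])')
print(unready_containers(line))  # ['coredns']
```

Pods listed as plain `Running` (e.g. `kindnet-5dhhf`) yield an empty list, which is why the test then enters the `pod_ready` extra-wait loop only for the pods with unready containers.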
	I0728 18:46:03.254629    4673 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:46:03.254667    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:46:03.254672    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.254677    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.254681    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.256449    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.256459    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.256467    4673 round_trippers.go:580]     Audit-Id: 662ec3c8-4097-484a-8e4a-fbb1205be3b7
	I0728 18:46:03.256472    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.256475    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.256481    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.256486    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.256495    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.256655    4673 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14303 chars]
	I0728 18:46:03.257221    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:03.257234    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:03.257244    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:03.257247    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:03.257251    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:03.257256    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:03.257260    4673 node_conditions.go:105] duration metric: took 2.627088ms to run NodePressure ...
	I0728 18:46:03.257272    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:46:03.476491    4673 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0728 18:46:03.560024    4673 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0728 18:46:03.561221    4673 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0728 18:46:03.561302    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0728 18:46:03.561314    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.561323    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.561329    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.564327    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.564345    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.564357    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.564366    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.564373    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.564377    4673 round_trippers.go:580]     Audit-Id: 5398763f-98bb-4d63-b62f-65eae8f2bf8c
	I0728 18:46:03.564383    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.564387    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.564706    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"835","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0728 18:46:03.565447    4673 kubeadm.go:739] kubelet initialised
	I0728 18:46:03.565457    4673 kubeadm.go:740] duration metric: took 4.224667ms waiting for restarted kubelet to initialise ...
	I0728 18:46:03.565464    4673 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:03.565496    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:03.565501    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.565507    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.565512    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.567799    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.567810    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.567815    4673 round_trippers.go:580]     Audit-Id: 71e7cf77-43dd-4eba-83ad-aec1770533f7
	I0728 18:46:03.567818    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.567821    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.567824    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.567827    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.567829    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.569091    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87012 chars]
	I0728 18:46:03.571083    4673 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.571138    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:03.571144    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.571150    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.571155    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.572865    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.572879    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.572885    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.572889    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.572893    4673 round_trippers.go:580]     Audit-Id: 303f0de4-e0fa-4af7-b2cf-e9f991463329
	I0728 18:46:03.572896    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.572915    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.572924    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.573039    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:03.573358    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.573366    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.573373    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.573379    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.575099    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.575116    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.575125    4673 round_trippers.go:580]     Audit-Id: 002d08be-b007-4e7e-9108-b8d1a891c201
	I0728 18:46:03.575129    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.575152    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.575162    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.575169    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.575173    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.575479    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.575792    4673 pod_ready.go:97] node "multinode-362000" hosting pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.575810    4673 pod_ready.go:81] duration metric: took 4.711453ms for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.575822    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.575835    4673 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.575896    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:46:03.575904    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.575913    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.575918    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.577693    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.577718    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.577725    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.577730    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.577733    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.577737    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.577740    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.577743    4673 round_trippers.go:580]     Audit-Id: caa7915d-a454-4b20-a4c7-f046a70c29ae
	I0728 18:46:03.577872    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"835","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0728 18:46:03.578174    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.578182    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.578188    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.578193    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.579777    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.579794    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.579804    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.579810    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.579816    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.579822    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.579827    4673 round_trippers.go:580]     Audit-Id: 8388909f-28e5-41f0-9e2b-2accd82fdb2c
	I0728 18:46:03.579831    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.580001    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.580271    4673 pod_ready.go:97] node "multinode-362000" hosting pod "etcd-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.580284    4673 pod_ready.go:81] duration metric: took 4.441108ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.580292    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "etcd-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.580305    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.580345    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:46:03.580351    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.580357    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.580361    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.582253    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.582265    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.582270    4673 round_trippers.go:580]     Audit-Id: ff2b1fa2-01db-45c0-9dde-77f359073a3e
	I0728 18:46:03.582274    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.582278    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.582281    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.582284    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.582287    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.582386    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"838","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0728 18:46:03.582697    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.582706    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.582712    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.582716    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.584391    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:03.584402    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.584408    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.584411    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.584414    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.584417    4673 round_trippers.go:580]     Audit-Id: 70157737-48ef-440c-a2fa-d76a7118783f
	I0728 18:46:03.584419    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.584422    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.584882    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.585118    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-apiserver-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.585129    4673 pod_ready.go:81] duration metric: took 4.817707ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.585136    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-apiserver-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.585144    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.585187    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:46:03.585192    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.585197    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.585202    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.587217    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.587230    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.587235    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.587238    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.587240    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.587243    4673 round_trippers.go:580]     Audit-Id: 46934f17-f1e2-4937-8162-9c93621655cb
	I0728 18:46:03.587245    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.587248    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.587348    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"839","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0728 18:46:03.647355    4673 request.go:629] Waited for 59.673173ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.647406    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:03.647415    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.647426    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.647434    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.649728    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.649739    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.649746    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.649751    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.649755    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.649758    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.649761    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.649764    4673 round_trippers.go:580]     Audit-Id: 2c3f7e32-6a26-47e9-8afc-4ce7375e35c5
	I0728 18:46:03.650362    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:03.650555    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-controller-manager-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.650569    4673 pod_ready.go:81] duration metric: took 65.419076ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:03.650576    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-controller-manager-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:03.650582    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:03.848587    4673 request.go:629] Waited for 197.964405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:46:03.848742    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:46:03.848753    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:03.848764    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:03.848770    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:03.851206    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:03.851227    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:03.851237    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:03.851246    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:03.851251    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:03.851256    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:03 GMT
	I0728 18:46:03.851259    4673 round_trippers.go:580]     Audit-Id: a8aed1d7-0eef-4626-9dc9-e26aba8bade3
	I0728 18:46:03.851264    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:03.851461    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gm24","generateName":"kube-proxy-","namespace":"kube-system","uid":"9db42267-b01f-40a3-bf21-c4d8cf6fb372","resourceVersion":"791","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:04.048596    4673 request.go:629] Waited for 196.805459ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:46:04.048724    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:46:04.048735    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.048746    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.048752    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.050870    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.050881    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.050887    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.050891    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.050896    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.050900    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.050904    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.050908    4673 round_trippers.go:580]     Audit-Id: ddb19336-96a6-40ce-8e69-2f220c6f258b
	I0728 18:46:04.051004    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m03","uid":"f2047331-d0da-470e-8da5-7b725a7d5c49","resourceVersion":"818","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_44_56_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3142 chars]
	I0728 18:46:04.051199    4673 pod_ready.go:92] pod "kube-proxy-7gm24" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:04.051211    4673 pod_ready.go:81] duration metric: took 400.625478ms for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.051219    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.248383    4673 request.go:629] Waited for 197.050186ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:04.248439    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:04.248447    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.248458    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.248467    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.251006    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.251018    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.251025    4673 round_trippers.go:580]     Audit-Id: d41e57d3-dc4f-4a37-ae68-f60ee45146ec
	I0728 18:46:04.251030    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.251036    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.251041    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.251045    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.251048    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.251220    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"488","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:04.447854    4673 request.go:629] Waited for 196.288477ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:04.447906    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:04.447916    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.447927    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.447932    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.450364    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.450377    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.450384    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.450390    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.450394    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.450398    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.450401    4673 round_trippers.go:580]     Audit-Id: 7a71e646-7769-4690-abb8-a1fc8004ec92
	I0728 18:46:04.450404    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.450731    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"552","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0728 18:46:04.450951    4673 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:04.450964    4673 pod_ready.go:81] duration metric: took 399.741092ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.450973    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:04.648036    4673 request.go:629] Waited for 196.965047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:04.648205    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:04.648219    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.648231    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.648240    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.650941    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:04.650955    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.650964    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.650968    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.650971    4673 round_trippers.go:580]     Audit-Id: 4c8bfa6a-8729-46af-88f9-50944792e7f9
	I0728 18:46:04.650975    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.650978    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.650982    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.651048    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"848","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0728 18:46:04.847040    4673 request.go:629] Waited for 195.669089ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:04.847073    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:04.847078    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:04.847118    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:04.847125    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:04.848826    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:04.848836    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:04.848844    4673 round_trippers.go:580]     Audit-Id: 8509a86f-61ec-49c5-bf04-5a95d1f2faeb
	I0728 18:46:04.848848    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:04.848851    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:04.848865    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:04.848872    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:04.848876    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:04 GMT
	I0728 18:46:04.848962    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:04.849147    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-proxy-tz5h5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:04.849158    4673 pod_ready.go:81] duration metric: took 398.181075ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:04.849164    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-proxy-tz5h5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:04.849169    4673 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:05.048177    4673 request.go:629] Waited for 198.951574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:05.048369    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:05.048380    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:05.048391    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:05.048398    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:05.051192    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:05.051214    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:05.051222    4673 round_trippers.go:580]     Audit-Id: bf0f2a0e-9e62-4bce-9dd6-d7e45a1792ae
	I0728 18:46:05.051225    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:05.051229    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:05.051234    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:05.051238    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:05.051241    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:05 GMT
	I0728 18:46:05.051520    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"834","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0728 18:46:05.248795    4673 request.go:629] Waited for 196.950351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:05.248895    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:05.248904    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:05.248915    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:05.248924    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:05.251844    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:05.251859    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:05.251866    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:05.251872    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:05.251876    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:05.251880    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:05 GMT
	I0728 18:46:05.251883    4673 round_trippers.go:580]     Audit-Id: fd53ac52-36c8-4a36-9c98-1e5e3bfbc51a
	I0728 18:46:05.251887    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:05.252200    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:05.252455    4673 pod_ready.go:97] node "multinode-362000" hosting pod "kube-scheduler-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:05.252472    4673 pod_ready.go:81] duration metric: took 403.300338ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	E0728 18:46:05.252482    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000" hosting pod "kube-scheduler-multinode-362000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000" has status "Ready":"False"
	I0728 18:46:05.252489    4673 pod_ready.go:38] duration metric: took 1.687030242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:05.252503    4673 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:46:05.263413    4673 command_runner.go:130] > -16
	I0728 18:46:05.263565    4673 ops.go:34] apiserver oom_adj: -16
	I0728 18:46:05.263572    4673 kubeadm.go:597] duration metric: took 8.547706097s to restartPrimaryControlPlane
	I0728 18:46:05.263578    4673 kubeadm.go:394] duration metric: took 8.568533174s to StartCluster
	I0728 18:46:05.263587    4673 settings.go:142] acquiring lock: {Name:mk9218fe520c81adf28e6207ae402102e10a5d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:05.263676    4673 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:46:05.264048    4673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/kubeconfig: {Name:mk76ac5b4283108fca1a66cc5cd0791fbea0691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:05.264314    4673 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:05.264327    4673 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:46:05.264447    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:05.308225    4673 out.go:177] * Verifying Kubernetes components...
	I0728 18:46:05.352178    4673 out.go:177] * Enabled addons: 
	I0728 18:46:05.373489    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:05.394137    4673 addons.go:510] duration metric: took 129.814599ms for enable addons: enabled=[]
	I0728 18:46:05.530364    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:46:05.542856    4673 node_ready.go:35] waiting up to 6m0s for node "multinode-362000" to be "Ready" ...
	I0728 18:46:05.542913    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:05.542919    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:05.542925    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:05.542928    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:05.544173    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:05.544182    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:05.544213    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:05.544218    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:05.544225    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:05 GMT
	I0728 18:46:05.544230    4673 round_trippers.go:580]     Audit-Id: 66ef8f37-b5be-468d-b667-dbe16d791ac7
	I0728 18:46:05.544235    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:05.544240    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:05.544353    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:06.045063    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:06.045088    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:06.045193    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:06.045205    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:06.047605    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:06.047617    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:06.047624    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:06 GMT
	I0728 18:46:06.047656    4673 round_trippers.go:580]     Audit-Id: 9818c57c-bafc-44d6-aa00-dbbe6b602d92
	I0728 18:46:06.047664    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:06.047669    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:06.047672    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:06.047676    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:06.047962    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:06.543394    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:06.543422    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:06.543434    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:06.543447    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:06.546471    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:06.546488    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:06.546496    4673 round_trippers.go:580]     Audit-Id: 52a69848-4d8a-4a54-9897-b751d38ecd7e
	I0728 18:46:06.546508    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:06.546513    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:06.546518    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:06.546522    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:06.546525    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:06 GMT
	I0728 18:46:06.546626    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:07.045034    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:07.045060    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:07.045071    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:07.045080    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:07.048063    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:07.048078    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:07.048086    4673 round_trippers.go:580]     Audit-Id: 00ba8ff8-1b8a-42a3-93d1-01013382ba46
	I0728 18:46:07.048091    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:07.048094    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:07.048098    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:07.048101    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:07.048104    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:07 GMT
	I0728 18:46:07.048177    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:07.542976    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:07.542992    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:07.543001    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:07.543034    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:07.545070    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:07.545081    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:07.545097    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:07.545107    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:07.545114    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:07 GMT
	I0728 18:46:07.545118    4673 round_trippers.go:580]     Audit-Id: a7499019-f86f-4dbe-bd14-355c8cb89d10
	I0728 18:46:07.545123    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:07.545125    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:07.545268    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:07.545471    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:08.045026    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:08.045053    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:08.045064    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:08.045072    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:08.047835    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:08.047851    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:08.047859    4673 round_trippers.go:580]     Audit-Id: fba0841d-ff46-4a3e-b939-14742d3a686e
	I0728 18:46:08.047863    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:08.047866    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:08.047869    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:08.047873    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:08.047877    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:08 GMT
	I0728 18:46:08.047960    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:08.544997    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:08.545024    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:08.545036    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:08.545041    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:08.547615    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:08.547630    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:08.547637    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:08.547641    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:08.547645    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:08.547649    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:08 GMT
	I0728 18:46:08.547652    4673 round_trippers.go:580]     Audit-Id: 164f8393-9fb5-4806-9c52-38422b7a7b30
	I0728 18:46:08.547657    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:08.547719    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:09.045022    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:09.045062    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:09.045074    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:09.045080    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:09.047760    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:09.047776    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:09.047783    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:09 GMT
	I0728 18:46:09.047787    4673 round_trippers.go:580]     Audit-Id: ebc0b325-15ad-4407-8d3e-743ff9541e92
	I0728 18:46:09.047791    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:09.047796    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:09.047799    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:09.047803    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:09.047997    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:09.544321    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:09.544349    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:09.544395    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:09.544404    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:09.547091    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:09.547106    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:09.547113    4673 round_trippers.go:580]     Audit-Id: d43958a7-5d3d-48bf-ba82-ef8580d5b782
	I0728 18:46:09.547117    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:09.547121    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:09.547124    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:09.547128    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:09.547131    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:09 GMT
	I0728 18:46:09.547395    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:09.547642    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:10.044999    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:10.045013    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:10.045018    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:10.045021    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:10.046893    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:10.046917    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:10.046934    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:10.046942    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:10.046953    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:10 GMT
	I0728 18:46:10.046976    4673 round_trippers.go:580]     Audit-Id: feeb560d-fd24-4434-b96e-9fe8fa976c83
	I0728 18:46:10.046983    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:10.046987    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:10.047155    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:10.543294    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:10.543320    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:10.543332    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:10.543338    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:10.546065    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:10.546079    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:10.546086    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:10.546090    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:10.546093    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:10.546095    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:10 GMT
	I0728 18:46:10.546099    4673 round_trippers.go:580]     Audit-Id: db3dc4e2-d90d-4816-aba5-bce00fdddf97
	I0728 18:46:10.546102    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:10.546179    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:11.045037    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:11.045064    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:11.045076    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:11.045081    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:11.047659    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:11.047676    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:11.047686    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:11.047693    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:11.047699    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:11.047706    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:11.047711    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:11 GMT
	I0728 18:46:11.047718    4673 round_trippers.go:580]     Audit-Id: c9186c44-34f5-4dd1-b086-2c827930ebc5
	I0728 18:46:11.047877    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:11.543581    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:11.543606    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:11.543618    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:11.543631    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:11.546284    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:11.546298    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:11.546305    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:11.546310    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:11 GMT
	I0728 18:46:11.546313    4673 round_trippers.go:580]     Audit-Id: 406c9862-f9db-46fc-a80a-e05dc2cf11a8
	I0728 18:46:11.546317    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:11.546321    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:11.546324    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:11.546458    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:12.045001    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:12.045030    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:12.045041    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:12.045048    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:12.048347    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:12.048363    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:12.048370    4673 round_trippers.go:580]     Audit-Id: 4bb1c035-73c4-4e29-bc62-41b55a590965
	I0728 18:46:12.048374    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:12.048390    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:12.048395    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:12.048400    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:12.048405    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:12 GMT
	I0728 18:46:12.048488    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:12.048729    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:12.542964    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:12.542981    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:12.543046    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:12.543052    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:12.545156    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:12.545166    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:12.545171    4673 round_trippers.go:580]     Audit-Id: ff5f0769-b432-4e44-ac9e-4fa1719357f5
	I0728 18:46:12.545175    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:12.545178    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:12.545182    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:12.545185    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:12.545188    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:12 GMT
	I0728 18:46:12.545411    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:13.044062    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:13.044089    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:13.044182    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:13.044190    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:13.046868    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:13.046883    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:13.046894    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:13.046903    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:13.046914    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:13 GMT
	I0728 18:46:13.046920    4673 round_trippers.go:580]     Audit-Id: c683fcdd-13f9-4ea4-9cee-0f3ac197efb2
	I0728 18:46:13.046923    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:13.046926    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:13.047256    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:13.542932    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:13.542998    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:13.543006    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:13.543010    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:13.544665    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:13.544688    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:13.544700    4673 round_trippers.go:580]     Audit-Id: c94d2f3f-ca72-4339-9b49-02f96670c69c
	I0728 18:46:13.544721    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:13.544728    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:13.544732    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:13.544778    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:13.544785    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:13 GMT
	I0728 18:46:13.544832    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"832","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0728 18:46:14.043459    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:14.043485    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:14.043577    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:14.043587    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:14.045897    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:14.045910    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:14.045917    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:14 GMT
	I0728 18:46:14.045922    4673 round_trippers.go:580]     Audit-Id: cc6159a8-d060-4bc8-9987-6613ff0cb383
	I0728 18:46:14.045926    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:14.045930    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:14.045934    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:14.045953    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:14.046215    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:14.544073    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:14.544102    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:14.544114    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:14.544201    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:14.547148    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:14.547167    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:14.547178    4673 round_trippers.go:580]     Audit-Id: 7b5e91a8-420b-4520-9a79-0e253be262cb
	I0728 18:46:14.547185    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:14.547201    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:14.547208    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:14.547213    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:14.547217    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:14 GMT
	I0728 18:46:14.547504    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:14.547766    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:15.044502    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:15.044530    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:15.044543    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:15.044551    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:15.047500    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:15.047514    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:15.047521    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:15 GMT
	I0728 18:46:15.047527    4673 round_trippers.go:580]     Audit-Id: f7fe108e-d3e8-4a4b-9795-95c1dcb8cdd2
	I0728 18:46:15.047532    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:15.047539    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:15.047545    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:15.047551    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:15.047655    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:15.543085    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:15.543105    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:15.543113    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:15.543122    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:15.544978    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:15.544987    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:15.544991    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:15.544995    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:15.544998    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:15.545000    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:15 GMT
	I0728 18:46:15.545003    4673 round_trippers.go:580]     Audit-Id: bcbbb740-f897-4438-bb14-d4489110f159
	I0728 18:46:15.545007    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:15.545130    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:16.045034    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:16.045071    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:16.045116    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:16.045125    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:16.047934    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:16.047952    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:16.047963    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:16 GMT
	I0728 18:46:16.047971    4673 round_trippers.go:580]     Audit-Id: 42350baf-ccaf-4bed-a159-5420db3fe12b
	I0728 18:46:16.047978    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:16.047982    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:16.047986    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:16.047989    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:16.048124    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:16.543832    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:16.543853    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:16.543861    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:16.543864    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:16.545777    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:16.545792    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:16.545801    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:16.545807    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:16.545811    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:16.545815    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:16.545818    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:16 GMT
	I0728 18:46:16.545822    4673 round_trippers.go:580]     Audit-Id: adc6d060-417c-4d7c-b414-131fcc6c1c96
	I0728 18:46:16.546040    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:17.043321    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:17.043347    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:17.043441    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:17.043451    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:17.046058    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:17.046073    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:17.046081    4673 round_trippers.go:580]     Audit-Id: e41093a7-258f-49d1-93f6-7f6fe0f09aa3
	I0728 18:46:17.046085    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:17.046088    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:17.046092    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:17.046096    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:17.046099    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:17 GMT
	I0728 18:46:17.046247    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:17.046497    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:17.543916    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:17.543943    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:17.543957    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:17.543965    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:17.546752    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:17.546767    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:17.546774    4673 round_trippers.go:580]     Audit-Id: e7f82c3f-840a-49fa-aa8b-00d2a86a7d20
	I0728 18:46:17.546780    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:17.546784    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:17.546787    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:17.546790    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:17.546794    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:17 GMT
	I0728 18:46:17.547105    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:18.043703    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:18.043723    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:18.043731    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:18.043791    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:18.045772    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:18.045796    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:18.045810    4673 round_trippers.go:580]     Audit-Id: ecc25900-df6d-4446-b249-c76fa67dcd39
	I0728 18:46:18.045816    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:18.045825    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:18.045831    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:18.045835    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:18.045840    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:18 GMT
	I0728 18:46:18.045937    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:18.544010    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:18.544040    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:18.544052    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:18.544059    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:18.546600    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:18.546616    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:18.546625    4673 round_trippers.go:580]     Audit-Id: 0b58c9b9-9688-49a3-ad30-7a5c2d538759
	I0728 18:46:18.546630    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:18.546636    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:18.546641    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:18.546645    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:18.546650    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:18 GMT
	I0728 18:46:18.546707    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:19.042938    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:19.042965    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:19.042976    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:19.042982    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:19.045332    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:19.045340    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:19.045345    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:19 GMT
	I0728 18:46:19.045348    4673 round_trippers.go:580]     Audit-Id: c598eca3-7a23-438d-9341-9d98f07cedfe
	I0728 18:46:19.045350    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:19.045352    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:19.045361    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:19.045366    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:19.045630    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:19.544343    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:19.544371    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:19.544461    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:19.544473    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:19.546997    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:19.547010    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:19.547017    4673 round_trippers.go:580]     Audit-Id: 9166962d-daa5-425b-a6d4-09359cea1a45
	I0728 18:46:19.547021    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:19.547026    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:19.547029    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:19.547034    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:19.547038    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:19 GMT
	I0728 18:46:19.547331    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:19.547581    4673 node_ready.go:53] node "multinode-362000" has status "Ready":"False"
	I0728 18:46:20.044349    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:20.044373    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:20.044384    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:20.044390    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:20.046974    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:20.046987    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:20.046994    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:20.047001    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:20 GMT
	I0728 18:46:20.047007    4673 round_trippers.go:580]     Audit-Id: 2ff15e4a-528c-4f84-80b8-e1b7a73a838f
	I0728 18:46:20.047012    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:20.047018    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:20.047023    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:20.047478    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:20.542912    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:20.542939    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:20.542948    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:20.542953    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:20.545341    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:20.545353    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:20.545359    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:20 GMT
	I0728 18:46:20.545364    4673 round_trippers.go:580]     Audit-Id: 64b92245-bfe4-4339-bf04-c3a08894fd2e
	I0728 18:46:20.545369    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:20.545374    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:20.545378    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:20.545382    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:20.545554    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:21.043514    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:21.043545    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:21.043606    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:21.043618    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:21.046644    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:21.046658    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:21.046665    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:21.046670    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:21 GMT
	I0728 18:46:21.046673    4673 round_trippers.go:580]     Audit-Id: 156a8527-9325-47dd-be01-940ee9577457
	I0728 18:46:21.046676    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:21.046681    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:21.046683    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:21.046773    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:21.544893    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:21.544917    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:21.544925    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:21.544932    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:21.547195    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:21.547207    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:21.547212    4673 round_trippers.go:580]     Audit-Id: 1a1d8322-6fe7-420f-96f6-20f97811bff9
	I0728 18:46:21.547215    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:21.547218    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:21.547220    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:21.547223    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:21.547226    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:21 GMT
	I0728 18:46:21.547272    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"959","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0728 18:46:22.043695    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:22.043722    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.043734    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.043739    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.046854    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:22.046873    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.046883    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.046888    4673 round_trippers.go:580]     Audit-Id: f6098c3a-d83f-49f6-95a2-2d2ab872a960
	I0728 18:46:22.046893    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.046898    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.046903    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.046909    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.047092    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:22.047354    4673 node_ready.go:49] node "multinode-362000" has status "Ready":"True"
	I0728 18:46:22.047370    4673 node_ready.go:38] duration metric: took 16.504612091s for node "multinode-362000" to be "Ready" ...
	I0728 18:46:22.047378    4673 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:22.047429    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:22.047437    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.047445    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.047450    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.050643    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:22.050651    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.050656    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.050659    4673 round_trippers.go:580]     Audit-Id: fa55cad0-b4a0-4db3-b378-422236819354
	I0728 18:46:22.050662    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.050664    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.050667    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.050670    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.051145    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"979"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86038 chars]
	I0728 18:46:22.052933    4673 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:22.052975    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:22.052979    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.052985    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.052988    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.054355    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:22.054363    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.054370    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.054377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.054384    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.054391    4673 round_trippers.go:580]     Audit-Id: bd91cc36-2b77-40a4-8b32-409126ce244b
	I0728 18:46:22.054395    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.054398    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.054542    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:22.054767    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:22.054774    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.054779    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.054783    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.055806    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:22.055813    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.055818    4673 round_trippers.go:580]     Audit-Id: 3e3e627c-ffec-47fb-a34b-cb5ff0d6669c
	I0728 18:46:22.055823    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.055826    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.055829    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.055831    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.055833    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.056066    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:22.554071    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:22.554097    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.554145    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.554156    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.556458    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:22.556469    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.556476    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.556481    4673 round_trippers.go:580]     Audit-Id: b030f475-f46c-43b2-8772-36bcbd61b75f
	I0728 18:46:22.556485    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.556489    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.556495    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.556502    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.556735    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:22.557097    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:22.557107    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:22.557115    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:22.557120    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:22.558345    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:22.558352    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:22.558357    4673 round_trippers.go:580]     Audit-Id: 97147439-01b2-480c-b13c-f913be98c3b8
	I0728 18:46:22.558360    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:22.558386    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:22.558394    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:22.558398    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:22.558401    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:22 GMT
	I0728 18:46:22.558548    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:23.053402    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:23.053422    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.053430    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.053434    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.055689    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:23.055698    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.055709    4673 round_trippers.go:580]     Audit-Id: 491a4ccd-431a-4ad9-9d73-d3a7074f9904
	I0728 18:46:23.055712    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.055714    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.055717    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.055719    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.055722    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.055924    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:23.056254    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:23.056261    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.056270    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.056275    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.057415    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:23.057425    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.057432    4673 round_trippers.go:580]     Audit-Id: 8f0e602b-4ea6-42ef-a5b9-b6c7d880f2c7
	I0728 18:46:23.057436    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.057440    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.057446    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.057449    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.057451    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.057592    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:23.554711    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:23.554733    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.554745    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.554759    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.557957    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:23.557970    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.557977    4673 round_trippers.go:580]     Audit-Id: 717f925b-80d5-4626-84b9-606a908e4e27
	I0728 18:46:23.557985    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.557990    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.557996    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.558005    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.558008    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.558567    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:23.558839    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:23.558846    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:23.558852    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:23.558856    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:23.560057    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:23.560064    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:23.560069    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:23 GMT
	I0728 18:46:23.560074    4673 round_trippers.go:580]     Audit-Id: db27b253-9c23-4534-9325-e325e18fc3d5
	I0728 18:46:23.560076    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:23.560078    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:23.560081    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:23.560085    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:23.560237    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"977","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0728 18:46:24.054520    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:24.054542    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.054552    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.054557    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.057200    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:24.057213    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.057224    4673 round_trippers.go:580]     Audit-Id: 17c96eab-dffc-4441-b9ee-1dd665695d72
	I0728 18:46:24.057233    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.057241    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.057247    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.057252    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.057258    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.057613    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:24.057994    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:24.058004    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.058011    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.058017    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.059280    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:24.059291    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.059298    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.059304    4673 round_trippers.go:580]     Audit-Id: 5663cc51-fa18-4198-b64e-c612f733851c
	I0728 18:46:24.059310    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.059316    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.059320    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.059324    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.059465    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:24.059632    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:24.553738    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:24.553763    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.553775    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.553781    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.556893    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:24.556909    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.556917    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.556921    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.556925    4673 round_trippers.go:580]     Audit-Id: 9bab4c2d-b47a-4697-b09d-5c325f3feecc
	I0728 18:46:24.556928    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.556931    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.556935    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.557111    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:24.557472    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:24.557482    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:24.557490    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:24.557495    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:24.559105    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:24.559115    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:24.559122    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:24.559140    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:24.559151    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:24.559154    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:24.559158    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:24 GMT
	I0728 18:46:24.559162    4673 round_trippers.go:580]     Audit-Id: 0cf3c394-70e7-4dff-aeeb-deb2dfb8026a
	I0728 18:46:24.559247    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:25.053655    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:25.053679    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.053691    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.053700    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.056720    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:25.056734    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.056742    4673 round_trippers.go:580]     Audit-Id: ad6aff22-1872-40c7-ab07-f98be348c2a5
	I0728 18:46:25.056747    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.056752    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.056755    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.056778    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.056787    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.056887    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:25.057257    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:25.057267    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.057275    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.057279    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.058603    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:25.058612    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.058617    4673 round_trippers.go:580]     Audit-Id: 1e5b5f2e-2153-4e8a-9193-57881f898e21
	I0728 18:46:25.058619    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.058621    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.058624    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.058627    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.058629    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.058699    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:25.555099    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:25.555207    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.555221    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.555230    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.557854    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:25.557868    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.557876    4673 round_trippers.go:580]     Audit-Id: 9384fbe9-b470-4854-8550-7024491f3972
	I0728 18:46:25.557880    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.557887    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.557892    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.557914    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.557922    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.558059    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:25.558438    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:25.558447    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:25.558456    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:25.558460    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:25.560004    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:25.560014    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:25.560019    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:25 GMT
	I0728 18:46:25.560022    4673 round_trippers.go:580]     Audit-Id: fb41d6a2-1911-416d-a270-4d454e97ad25
	I0728 18:46:25.560025    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:25.560028    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:25.560031    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:25.560034    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:25.560103    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:26.053984    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:26.054008    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.054019    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.054025    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.056944    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:26.056960    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.056967    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.056972    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.056997    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.057012    4673 round_trippers.go:580]     Audit-Id: 7588d8da-bacc-4c81-bfe2-ebe25cc09f3d
	I0728 18:46:26.057019    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.057024    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.057351    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:26.057725    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:26.057736    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.057744    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.057748    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.059216    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:26.059227    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.059232    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.059236    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.059239    4673 round_trippers.go:580]     Audit-Id: 8f9da684-b57b-439e-ac10-be76459af05b
	I0728 18:46:26.059242    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.059246    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.059249    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.059305    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:26.554231    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:26.554247    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.554253    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.554257    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.555906    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:26.555917    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.555922    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.555925    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.555928    4673 round_trippers.go:580]     Audit-Id: 15c156d6-277e-4b52-ad65-7a9340a270c1
	I0728 18:46:26.555930    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.555940    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.555944    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.556055    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:26.556328    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:26.556335    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:26.556341    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:26.556344    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:26.557440    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:26.557448    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:26.557453    4673 round_trippers.go:580]     Audit-Id: db74b62a-5a02-4079-bf52-19c4202782da
	I0728 18:46:26.557456    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:26.557459    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:26.557461    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:26.557463    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:26.557466    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:26 GMT
	I0728 18:46:26.557523    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:26.557688    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:27.053477    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:27.053503    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.053515    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.053524    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.056453    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:27.056470    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.056478    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.056482    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.056496    4673 round_trippers.go:580]     Audit-Id: bd8354b4-9607-4a9f-b2bd-21c1e0cb9963
	I0728 18:46:27.056502    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.056507    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.056510    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.056582    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:27.056941    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:27.056950    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.056958    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.056962    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.058137    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:27.058143    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.058148    4673 round_trippers.go:580]     Audit-Id: ef1b79bd-a3ec-4c52-8da5-a52b3e48e6c4
	I0728 18:46:27.058151    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.058155    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.058157    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.058160    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.058163    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.058232    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:27.554641    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:27.554665    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.554677    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.554685    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.557542    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:27.557577    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.557627    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.557639    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.557644    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.557649    4673 round_trippers.go:580]     Audit-Id: 98fa8e01-106c-4f9a-8071-9062e7046442
	I0728 18:46:27.557653    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.557657    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.557749    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:27.558101    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:27.558117    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:27.558125    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:27.558128    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:27.559585    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:27.559595    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:27.559600    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:27.559604    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:27.559618    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:27.559626    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:27.559629    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:27 GMT
	I0728 18:46:27.559632    4673 round_trippers.go:580]     Audit-Id: 30595a70-2a97-4d70-9a57-53950c643d7c
	I0728 18:46:27.559729    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:28.053340    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:28.053364    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.053375    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.053380    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.056222    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:28.056251    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.056286    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.056301    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.056317    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.056321    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.056324    4673 round_trippers.go:580]     Audit-Id: 3673f36a-ebda-425c-866a-bf425359b217
	I0728 18:46:28.056329    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.056421    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:28.056783    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:28.056793    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.056801    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.056807    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.058118    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:28.058127    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.058132    4673 round_trippers.go:580]     Audit-Id: c3041389-cdd8-440d-9b50-8f38a62bedfe
	I0728 18:46:28.058148    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.058154    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.058160    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.058165    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.058169    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.058236    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:28.554811    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:28.554836    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.554891    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.554900    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.557398    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:28.557410    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.557418    4673 round_trippers.go:580]     Audit-Id: 946aa63a-5b25-418d-bd8f-cdd3170e04c1
	I0728 18:46:28.557423    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.557428    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.557433    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.557440    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.557448    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.557737    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:28.558090    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:28.558100    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:28.558110    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:28.558114    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:28.559676    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:28.559683    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:28.559688    4673 round_trippers.go:580]     Audit-Id: 46fbf5c9-1fdc-44e1-9cfa-b6c0a3ffac88
	I0728 18:46:28.559691    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:28.559694    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:28.559698    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:28.559701    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:28.559703    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:28 GMT
	I0728 18:46:28.559987    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:28.560161    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:29.053744    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:29.053767    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.053778    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.053785    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.056546    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:29.056558    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.056564    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.056569    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.056574    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.056580    4673 round_trippers.go:580]     Audit-Id: 0d70e27a-e492-438a-b910-b79155503968
	I0728 18:46:29.056587    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.056591    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.056671    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:29.057029    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:29.057038    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.057047    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.057050    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.058440    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:29.058449    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.058454    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.058457    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.058461    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.058463    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.058466    4673 round_trippers.go:580]     Audit-Id: 82784609-d28e-44b7-8b59-36ee33cd266d
	I0728 18:46:29.058470    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.058524    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:29.553102    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:29.553122    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.553131    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.553137    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.555204    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:29.555214    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.555222    4673 round_trippers.go:580]     Audit-Id: e00ff64f-7619-4c62-a778-75045c1c7929
	I0728 18:46:29.555228    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.555235    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.555239    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.555244    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.555249    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.555444    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:29.555835    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:29.555842    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:29.555867    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:29.555871    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:29.556965    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:29.556973    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:29.556977    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:29.556980    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:29.556983    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:29.556985    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:29.556987    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:29 GMT
	I0728 18:46:29.556990    4673 round_trippers.go:580]     Audit-Id: 1f84a3a8-16b2-46ae-a170-de3a8478c9ce
	I0728 18:46:29.557155    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:30.054958    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:30.054987    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.055028    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.055056    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.057543    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:30.057558    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.057566    4673 round_trippers.go:580]     Audit-Id: 895d7393-eae6-446d-a2db-ca87945e4250
	I0728 18:46:30.057570    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.057575    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.057585    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.057588    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.057593    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.057759    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:30.058124    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:30.058135    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.058142    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.058155    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.059555    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:30.059566    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.059572    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.059576    4673 round_trippers.go:580]     Audit-Id: 9a197be8-58f8-4102-982b-89136f4cd198
	I0728 18:46:30.059580    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.059583    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.059586    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.059589    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.059964    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:30.554278    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:30.554292    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.554297    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.554301    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.556057    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:30.556067    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.556072    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.556075    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.556077    4673 round_trippers.go:580]     Audit-Id: 48db864f-ca2b-426c-ae18-a8e0a382a5a0
	I0728 18:46:30.556080    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.556083    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.556087    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.556323    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:30.556631    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:30.556638    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:30.556644    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:30.556647    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:30.557723    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:30.557732    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:30.557739    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:30.557744    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:30.557748    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:30.557753    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:30 GMT
	I0728 18:46:30.557758    4673 round_trippers.go:580]     Audit-Id: 3ab55ac0-1676-4c2c-961a-c138c0a1662f
	I0728 18:46:30.557763    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:30.557871    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:31.053439    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:31.053462    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.053471    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.053477    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.056264    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:31.056276    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.056285    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.056290    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.056295    4673 round_trippers.go:580]     Audit-Id: 2f1121a9-5bdd-4328-bc9e-25bdc3609014
	I0728 18:46:31.056307    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.056312    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.056316    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.056549    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:31.056933    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:31.056943    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.056950    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.056955    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.058234    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:31.058243    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.058248    4673 round_trippers.go:580]     Audit-Id: cb1346b3-8032-445b-b881-e62088da4b16
	I0728 18:46:31.058265    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.058270    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.058273    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.058276    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.058279    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.058381    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:31.058559    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:31.553220    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:31.553243    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.553256    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.553262    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.557537    4673 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:46:31.557550    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.557555    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.557558    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.557561    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.557563    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.557566    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.557568    4673 round_trippers.go:580]     Audit-Id: 87141df1-a26a-4671-8a8c-11268925a051
	I0728 18:46:31.558343    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:31.558644    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:31.558651    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:31.558661    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:31.558664    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:31.560738    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:31.560748    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:31.560753    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:31 GMT
	I0728 18:46:31.560757    4673 round_trippers.go:580]     Audit-Id: 27273db0-15ee-4dc6-8fb5-ddd50941d5bb
	I0728 18:46:31.560760    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:31.560764    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:31.560767    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:31.560769    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:31.560873    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:32.053170    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:32.053191    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.053203    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.053208    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.055427    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:32.055440    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.055447    4673 round_trippers.go:580]     Audit-Id: 0c8a0008-7129-47bf-b950-18080b06b05b
	I0728 18:46:32.055453    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.055459    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.055466    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.055471    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.055475    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.055747    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:32.056111    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:32.056121    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.056129    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.056134    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.057374    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:32.057385    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.057392    4673 round_trippers.go:580]     Audit-Id: f23c9011-be36-4abf-8309-5bdba4eab32a
	I0728 18:46:32.057397    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.057400    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.057403    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.057407    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.057411    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.057563    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:32.553289    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:32.553311    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.553322    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.553328    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.555771    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:32.555784    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.555791    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.555794    4673 round_trippers.go:580]     Audit-Id: 0f167ca1-4904-46dd-8c2b-76e9cd0083a6
	I0728 18:46:32.555797    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.555800    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.555804    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.555812    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.556047    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:32.556433    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:32.556443    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:32.556451    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:32.556456    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:32.557912    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:32.557921    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:32.557926    4673 round_trippers.go:580]     Audit-Id: aacde14a-6e0c-4e38-b773-e34483fddd92
	I0728 18:46:32.557930    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:32.557934    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:32.557937    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:32.557940    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:32.557943    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:32 GMT
	I0728 18:46:32.558019    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:33.054431    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:33.054454    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.054466    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.054475    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.057190    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:33.057201    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.057208    4673 round_trippers.go:580]     Audit-Id: aa7ee3cb-ddab-4d90-ac64-82b9f4a8b7ca
	I0728 18:46:33.057214    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.057219    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.057223    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.057226    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.057229    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.057644    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:33.058003    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:33.058012    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.058017    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.058023    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.059014    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:33.059023    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.059028    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.059041    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.059045    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.059048    4673 round_trippers.go:580]     Audit-Id: 382aab00-53b9-4fb6-8552-fe5d40a50ae6
	I0728 18:46:33.059051    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.059054    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.059166    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:33.059349    4673 pod_ready.go:102] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"False"
	I0728 18:46:33.553011    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:33.553036    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.553118    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.553129    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.555509    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:33.555519    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.555549    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.555582    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.555606    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.555616    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.555621    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.555625    4673 round_trippers.go:580]     Audit-Id: 4be7f138-2fcb-4bee-9667-7c2ce37a2796
	I0728 18:46:33.556023    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:33.556380    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:33.556387    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:33.556393    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:33.556396    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:33.557489    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:33.557497    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:33.557501    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:33.557505    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:33 GMT
	I0728 18:46:33.557508    4673 round_trippers.go:580]     Audit-Id: 0a91cc0e-e80f-4f58-b06c-4643f2e9cba1
	I0728 18:46:33.557512    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:33.557516    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:33.557520    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:33.557674    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:34.053940    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:34.053955    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.053961    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.053966    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.056143    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:34.056161    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.056170    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.056174    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.056178    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.056182    4673 round_trippers.go:580]     Audit-Id: 72e823cd-721c-4cbc-973c-e397e6ff85b8
	I0728 18:46:34.056192    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.056195    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.056325    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:34.056615    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:34.056622    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.056627    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.056629    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.060741    4673 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 18:46:34.060753    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.060759    4673 round_trippers.go:580]     Audit-Id: 9dc3456c-f263-47de-8e08-57a1000e34df
	I0728 18:46:34.060762    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.060765    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.060767    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.060776    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.060779    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.060852    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:34.554369    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:34.554392    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.554402    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.554409    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.557083    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:34.557101    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.557111    4673 round_trippers.go:580]     Audit-Id: da982c2d-805c-4f22-97d4-f9af6ab9ff8a
	I0728 18:46:34.557116    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.557119    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.557123    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.557125    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.557130    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.557219    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"841","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0728 18:46:34.557599    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:34.557608    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:34.557616    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:34.557623    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:34.559012    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:34.559026    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:34.559037    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:34.559054    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:34.559065    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:34.559101    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:34 GMT
	I0728 18:46:34.559110    4673 round_trippers.go:580]     Audit-Id: 117c58c2-c316-4dc5-8944-1a741a9e6f82
	I0728 18:46:34.559115    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:34.559231    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.054383    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:46:35.054406    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.054419    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.054424    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.057108    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.057128    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.057136    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.057140    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.057144    4673 round_trippers.go:580]     Audit-Id: dacaa515-204d-487c-8c08-421f1408e92f
	I0728 18:46:35.057160    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.057166    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.057169    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.057254    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0728 18:46:35.057629    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.057638    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.057646    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.057650    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.059123    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:35.059131    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.059136    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.059139    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.059142    4673 round_trippers.go:580]     Audit-Id: da634a22-0db1-48b7-9407-3e90ce62a5ec
	I0728 18:46:35.059144    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.059147    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.059149    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.059235    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.059457    4673 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.059478    4673 pod_ready.go:81] duration metric: took 13.006630167s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.059504    4673 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.059531    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:46:35.059536    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.059541    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.059558    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.060683    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:35.060689    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.060693    4673 round_trippers.go:580]     Audit-Id: 54dc2837-8cdd-449b-acda-f2d4dfa6063a
	I0728 18:46:35.060697    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.060700    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.060716    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.060721    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.060725    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.060858    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"971","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0728 18:46:35.061068    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.061080    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.061086    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.061090    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.062095    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.062104    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.062112    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.062142    4673 round_trippers.go:580]     Audit-Id: 378f0039-e672-4a84-a68e-01c8a3cf8201
	I0728 18:46:35.062149    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.062155    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.062159    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.062162    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.062285    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.062449    4673 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.062457    4673 pod_ready.go:81] duration metric: took 2.948208ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.062466    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.062501    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:46:35.062506    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.062511    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.062515    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.063872    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:35.063880    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.063885    4673 round_trippers.go:580]     Audit-Id: 988d663d-2973-4c67-a678-e674a3485aa4
	I0728 18:46:35.063889    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.063892    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.063896    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.063898    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.063900    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.064101    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"961","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0728 18:46:35.064330    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.064337    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.064342    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.064345    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.065310    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.065318    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.065322    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.065325    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.065336    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.065339    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.065356    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.065362    4673 round_trippers.go:580]     Audit-Id: aceaf56d-797d-495a-9f24-2c2e1eb93604
	I0728 18:46:35.065495    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.065659    4673 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.065667    4673 pod_ready.go:81] duration metric: took 3.195535ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.065673    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.065702    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:46:35.065707    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.065712    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.065716    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.066537    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.066544    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.066550    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.066554    4673 round_trippers.go:580]     Audit-Id: acce14da-783e-48b8-847d-5f0b73f047c8
	I0728 18:46:35.066572    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.066578    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.066581    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.066584    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.066704    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"969","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0728 18:46:35.066934    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.066940    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.066946    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.066950    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.067796    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.067802    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.067805    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.067808    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.067811    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.067815    4673 round_trippers.go:580]     Audit-Id: 79dcf1d9-dbb7-4576-9e35-15e921ae005c
	I0728 18:46:35.067818    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.067820    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.067978    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.068161    4673 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.068168    4673 pod_ready.go:81] duration metric: took 2.490787ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.068175    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.068203    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:46:35.068208    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.068213    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.068217    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.069147    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.069155    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.069160    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.069164    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.069168    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.069171    4673 round_trippers.go:580]     Audit-Id: 7fef5e1c-ccad-48d3-bef1-dae798419617
	I0728 18:46:35.069174    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.069177    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.069347    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gm24","generateName":"kube-proxy-","namespace":"kube-system","uid":"9db42267-b01f-40a3-bf21-c4d8cf6fb372","resourceVersion":"791","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:35.069575    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:46:35.069582    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.069587    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.069591    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.070457    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:35.070465    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.070470    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.070474    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.070485    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.070489    4673 round_trippers.go:580]     Audit-Id: 6a83c6ff-a36b-438b-aefc-653486499cfe
	I0728 18:46:35.070491    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.070494    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.070808    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m03","uid":"f2047331-d0da-470e-8da5-7b725a7d5c49","resourceVersion":"818","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_44_56_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3142 chars]
	I0728 18:46:35.070938    4673 pod_ready.go:92] pod "kube-proxy-7gm24" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.070945    4673 pod_ready.go:81] duration metric: took 2.764802ms for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.070950    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.255986    4673 request.go:629] Waited for 185.000378ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:35.256123    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:46:35.256133    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.256143    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.256148    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.258705    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.258720    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.258727    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.258731    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.258755    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.258762    4673 round_trippers.go:580]     Audit-Id: 7a8a77c4-9da9-4d4b-b976-46a852f0b4b4
	I0728 18:46:35.258768    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.258771    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.259140    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"488","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0728 18:46:35.454893    4673 request.go:629] Waited for 195.324739ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:35.454962    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:35.454972    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.454983    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.454991    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.457268    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.457281    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.457288    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.457292    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.457302    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.457307    4673 round_trippers.go:580]     Audit-Id: 8100b577-1beb-4ec4-98d5-6b4144066370
	I0728 18:46:35.457311    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.457314    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.457426    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90","resourceVersion":"552","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_40_51_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0728 18:46:35.457643    4673 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.457654    4673 pod_ready.go:81] duration metric: took 386.700912ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.457663    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.654375    4673 request.go:629] Waited for 196.660537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:35.654484    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:46:35.654501    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.654513    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.654521    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.656988    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.657002    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.657009    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.657013    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.657017    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.657020    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:35 GMT
	I0728 18:46:35.657024    4673 round_trippers.go:580]     Audit-Id: 935ea276-b47a-4af4-801e-20cc74a065b8
	I0728 18:46:35.657029    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.657215    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"974","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0728 18:46:35.854776    4673 request.go:629] Waited for 197.226685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.854827    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:35.854835    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:35.854844    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:35.854850    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:35.857440    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:35.857453    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:35.857460    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:35.857469    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:35.857475    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:35.857479    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:35.857484    4673 round_trippers.go:580]     Audit-Id: ab6ca91d-53c7-4f2f-86d8-40bae10da2d6
	I0728 18:46:35.857497    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:35.857891    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:35.858153    4673 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:35.858165    4673 pod_ready.go:81] duration metric: took 400.49922ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:35.858174    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:36.054446    4673 request.go:629] Waited for 196.226607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:36.054567    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:46:36.054580    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.054591    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.054598    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.057082    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:36.057096    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.057104    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.057108    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.057112    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.057116    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.057119    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.057123    4673 round_trippers.go:580]     Audit-Id: f9db7be3-b20c-42bc-a3b7-a9c9b502c232
	I0728 18:46:36.057234    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"970","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0728 18:46:36.255540    4673 request.go:629] Waited for 197.989919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:36.255579    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:46:36.255589    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.255598    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.255604    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.257516    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:36.257528    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.257534    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.257538    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.257541    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.257545    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.257548    4673 round_trippers.go:580]     Audit-Id: 25cd92d4-31ad-4d49-90d7-18d54faddb30
	I0728 18:46:36.257552    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.257853    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:46:36.258126    4673 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:46:36.258135    4673 pod_ready.go:81] duration metric: took 399.957319ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:46:36.258141    4673 pod_ready.go:38] duration metric: took 14.210858858s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:46:36.258155    4673 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:46:36.258205    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:46:36.272439    4673 command_runner.go:130] > 1742
	I0728 18:46:36.272689    4673 api_server.go:72] duration metric: took 31.008584578s to wait for apiserver process to appear ...
	I0728 18:46:36.272698    4673 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:46:36.272707    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:36.276033    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:46:36.276063    4673 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0728 18:46:36.276067    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.276085    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.276093    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.276656    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:46:36.276664    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.276669    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.276678    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.276682    4673 round_trippers.go:580]     Content-Length: 263
	I0728 18:46:36.276685    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.276694    4673 round_trippers.go:580]     Audit-Id: 5a8e0660-3971-49a5-be49-8d5b3568bdfb
	I0728 18:46:36.276702    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.276704    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.276718    4673 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 18:46:36.276739    4673 api_server.go:141] control plane version: v1.30.3
	I0728 18:46:36.276748    4673 api_server.go:131] duration metric: took 4.045315ms to wait for apiserver health ...
	I0728 18:46:36.276752    4673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 18:46:36.454343    4673 request.go:629] Waited for 177.560441ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.454475    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.454480    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.454531    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.454534    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.457584    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:36.457596    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.457604    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.457609    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.457616    4673 round_trippers.go:580]     Audit-Id: 00b0fcfe-ae65-4915-980b-2ee6e8c13970
	I0728 18:46:36.457620    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.457622    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.457633    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.458950    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86400 chars]
	I0728 18:46:36.461082    4673 system_pods.go:59] 12 kube-system pods found
	I0728 18:46:36.461117    4673 system_pods.go:61] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:46:36.461120    4673 system_pods.go:61] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:46:36.461123    4673 system_pods.go:61] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:46:36.461128    4673 system_pods.go:61] "kindnet-5dhhf" [e124802a-dbb6-4100-8c49-8a75ea05217a] Running
	I0728 18:46:36.461133    4673 system_pods.go:61] "kindnet-8hhwv" [487e32b7-7175-4187-89ba-90bb4d597681] Running
	I0728 18:46:36.461136    4673 system_pods.go:61] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:46:36.461143    4673 system_pods.go:61] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:46:36.461147    4673 system_pods.go:61] "kube-proxy-7gm24" [9db42267-b01f-40a3-bf21-c4d8cf6fb372] Running
	I0728 18:46:36.461149    4673 system_pods.go:61] "kube-proxy-dzz6p" [577d6ba2-e17a-426f-8315-1688766fa435] Running
	I0728 18:46:36.461152    4673 system_pods.go:61] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:46:36.461154    4673 system_pods.go:61] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:46:36.461158    4673 system_pods.go:61] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 18:46:36.461163    4673 system_pods.go:74] duration metric: took 184.408643ms to wait for pod list to return data ...
	I0728 18:46:36.461195    4673 default_sa.go:34] waiting for default service account to be created ...
	I0728 18:46:36.655695    4673 request.go:629] Waited for 194.407341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:46:36.655774    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0728 18:46:36.655782    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.655792    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.655799    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.658351    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:36.658365    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.658372    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.658377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.658380    4673 round_trippers.go:580]     Content-Length: 262
	I0728 18:46:36.658392    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:36 GMT
	I0728 18:46:36.658395    4673 round_trippers.go:580]     Audit-Id: e30dfd57-31ba-4f5b-b764-5ca09573e21c
	I0728 18:46:36.658400    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.658404    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.658417    4673 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"379c0dca-8465-4249-afbe-a226c72714a6","resourceVersion":"334","creationTimestamp":"2024-07-29T01:40:10Z"}}]}
	I0728 18:46:36.658589    4673 default_sa.go:45] found service account: "default"
	I0728 18:46:36.658602    4673 default_sa.go:55] duration metric: took 197.402552ms for default service account to be created ...
	I0728 18:46:36.658609    4673 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 18:46:36.855067    4673 request.go:629] Waited for 196.404299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.855222    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:46:36.855233    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:36.855254    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:36.855264    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:36.858883    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:46:36.858899    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:36.858909    4673 round_trippers.go:580]     Audit-Id: a700e568-5bf5-4e76-b117-bcb58a728fa3
	I0728 18:46:36.858917    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:36.858923    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:36.858929    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:36.858933    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:36.858938    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:37 GMT
	I0728 18:46:36.860402    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86400 chars]
	I0728 18:46:36.862307    4673 system_pods.go:86] 12 kube-system pods found
	I0728 18:46:36.862318    4673 system_pods.go:89] "coredns-7db6d8ff4d-8npcw" [a0fcbb6f-1182-4d9e-bc04-456f1b4de1db] Running
	I0728 18:46:36.862323    4673 system_pods.go:89] "etcd-multinode-362000" [7b75e781-36f1-4f6f-99a4-808974571bcd] Running
	I0728 18:46:36.862326    4673 system_pods.go:89] "kindnet-4mw5v" [053773ee-043a-48e0-9f70-411430b19acd] Running
	I0728 18:46:36.862330    4673 system_pods.go:89] "kindnet-5dhhf" [e124802a-dbb6-4100-8c49-8a75ea05217a] Running
	I0728 18:46:36.862334    4673 system_pods.go:89] "kindnet-8hhwv" [487e32b7-7175-4187-89ba-90bb4d597681] Running
	I0728 18:46:36.862337    4673 system_pods.go:89] "kube-apiserver-multinode-362000" [95b0fc9b-aad1-47ad-ae00-439b4e4b905a] Running
	I0728 18:46:36.862340    4673 system_pods.go:89] "kube-controller-manager-multinode-362000" [5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9] Running
	I0728 18:46:36.862347    4673 system_pods.go:89] "kube-proxy-7gm24" [9db42267-b01f-40a3-bf21-c4d8cf6fb372] Running
	I0728 18:46:36.862351    4673 system_pods.go:89] "kube-proxy-dzz6p" [577d6ba2-e17a-426f-8315-1688766fa435] Running
	I0728 18:46:36.862354    4673 system_pods.go:89] "kube-proxy-tz5h5" [f791f783-464c-485b-9eda-97a5f857cca4] Running
	I0728 18:46:36.862358    4673 system_pods.go:89] "kube-scheduler-multinode-362000" [0299d0c0-d45d-45ee-9b8e-b5900e92694b] Running
	I0728 18:46:36.862363    4673 system_pods.go:89] "storage-provisioner" [9032906f-5102-4224-b894-d541cf7d67e7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 18:46:36.862368    4673 system_pods.go:126] duration metric: took 203.756211ms to wait for k8s-apps to be running ...
	I0728 18:46:36.862373    4673 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:46:36.862422    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:46:36.874191    4673 system_svc.go:56] duration metric: took 11.813962ms WaitForService to wait for kubelet
	I0728 18:46:36.874209    4673 kubeadm.go:582] duration metric: took 31.61010905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:36.874221    4673 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:46:37.055720    4673 request.go:629] Waited for 181.451407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:46:37.055857    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:46:37.055868    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:37.055876    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:37.055884    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:37.058412    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:37.058425    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:37.058433    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:37.058437    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:37.058440    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:37 GMT
	I0728 18:46:37.058444    4673 round_trippers.go:580]     Audit-Id: cb9bf5f0-9a22-4094-8dd3-972ad61b1792
	I0728 18:46:37.058448    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:37.058451    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:37.058945    4673 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1008"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 14177 chars]
	I0728 18:46:37.059487    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:37.059500    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:37.059510    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:37.059518    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:37.059532    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:46:37.059536    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:46:37.059541    4673 node_conditions.go:105] duration metric: took 185.312345ms to run NodePressure ...
	I0728 18:46:37.059551    4673 start.go:241] waiting for startup goroutines ...
	I0728 18:46:37.059559    4673 start.go:246] waiting for cluster config update ...
	I0728 18:46:37.059573    4673 start.go:255] writing updated cluster config ...
	I0728 18:46:37.080324    4673 out.go:177] 
	I0728 18:46:37.102477    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:37.102625    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:46:37.126122    4673 out.go:177] * Starting "multinode-362000-m02" worker node in "multinode-362000" cluster
	I0728 18:46:37.169063    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:46:37.169099    4673 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:37.169314    4673 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:46:37.169335    4673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:46:37.169472    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:46:37.170711    4673 start.go:360] acquireMachinesLock for multinode-362000-m02: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:37.170834    4673 start.go:364] duration metric: took 97.592µs to acquireMachinesLock for "multinode-362000-m02"
	I0728 18:46:37.170860    4673 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:37.170868    4673 fix.go:54] fixHost starting: m02
	I0728 18:46:37.171310    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:37.171338    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:37.180385    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52877
	I0728 18:46:37.180766    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:37.181099    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:37.181110    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:37.181327    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:37.181459    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:37.181557    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:46:37.181637    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:37.181723    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:46:37.182624    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid 4486 missing from process table
	I0728 18:46:37.182663    4673 fix.go:112] recreateIfNeeded on multinode-362000-m02: state=Stopped err=<nil>
	I0728 18:46:37.182699    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	W0728 18:46:37.182776    4673 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:37.203928    4673 out.go:177] * Restarting existing hyperkit VM for "multinode-362000-m02" ...
	I0728 18:46:37.245921    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .Start
	I0728 18:46:37.246363    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:37.246420    4673 main.go:141] libmachine: (multinode-362000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid
	I0728 18:46:37.248123    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid 4486 missing from process table
	I0728 18:46:37.248141    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | pid 4486 is in state "Stopped"
	I0728 18:46:37.248164    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid...
	I0728 18:46:37.248742    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Using UUID 803737f6-60f1-4d1a-bdda-22c83e05ebd1
	I0728 18:46:37.275290    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Generated MAC 6:55:c7:17:95:12
	I0728 18:46:37.275312    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:46:37.275454    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000405350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:46:37.275488    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"803737f6-60f1-4d1a-bdda-22c83e05ebd1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000405350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:46:37.275537    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "803737f6-60f1-4d1a-bdda-22c83e05ebd1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/j
enkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:46:37.275574    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 803737f6-60f1-4d1a-bdda-22c83e05ebd1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/multinode-362000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/mult
inode-362000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:46:37.275583    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:46:37.277050    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 DEBUG: hyperkit: Pid is 4695
	I0728 18:46:37.277444    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Attempt 0
	I0728 18:46:37.277479    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:37.278136    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4695
	I0728 18:46:37.279153    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Searching for 6:55:c7:17:95:12 in /var/db/dhcpd_leases ...
	I0728 18:46:37.279247    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:46:37.279263    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a845cb}
	I0728 18:46:37.279287    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
	I0728 18:46:37.279301    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84496}
	I0728 18:46:37.279315    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | Found match: 6:55:c7:17:95:12
	I0728 18:46:37.279327    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | IP: 192.169.0.14
	I0728 18:46:37.279358    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetConfigRaw
	I0728 18:46:37.280046    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:37.280241    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:46:37.280726    4673 machine.go:94] provisionDockerMachine start ...
	I0728 18:46:37.280738    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:37.280865    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:37.280969    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:37.281063    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:37.281149    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:37.281225    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:37.281387    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:37.281571    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:37.281579    4673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:46:37.285163    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:46:37.293106    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:46:37.294195    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:46:37.294211    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:46:37.294219    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:46:37.294227    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:46:37.678909    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:46:37.678928    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:46:37.793676    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:46:37.793707    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:46:37.793717    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:46:37.793731    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:46:37.794515    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:46:37.794524    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:46:43.388045    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:46:43.388113    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:46:43.388123    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:46:43.411630    4673 main.go:141] libmachine: (multinode-362000-m02) DBG | 2024/07/28 18:46:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:46:48.338747    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:46:48.338763    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:46:48.338902    4673 buildroot.go:166] provisioning hostname "multinode-362000-m02"
	I0728 18:46:48.338914    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:46:48.339003    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.339080    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.339173    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.339249    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.339327    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.339462    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.339605    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.339614    4673 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m02 && echo "multinode-362000-m02" | sudo tee /etc/hostname
	I0728 18:46:48.399738    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m02
	
	I0728 18:46:48.399753    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.399878    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.399983    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.400072    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.400176    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.400303    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.400441    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.400452    4673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:46:48.454950    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:46:48.454974    4673 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:46:48.454993    4673 buildroot.go:174] setting up certificates
	I0728 18:46:48.454999    4673 provision.go:84] configureAuth start
	I0728 18:46:48.455006    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetMachineName
	I0728 18:46:48.455155    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:48.455258    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.455356    4673 provision.go:143] copyHostCerts
	I0728 18:46:48.455387    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:46:48.455451    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:46:48.455457    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:46:48.455838    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:46:48.456074    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:46:48.456115    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:46:48.456120    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:46:48.456222    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:46:48.456370    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:46:48.456412    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:46:48.456417    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:46:48.456517    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:46:48.456687    4673 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-362000-m02]
	I0728 18:46:48.562747    4673 provision.go:177] copyRemoteCerts
	I0728 18:46:48.562797    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:46:48.562812    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.562955    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.563073    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.563160    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.563248    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:48.594219    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:46:48.594286    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:46:48.613653    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:46:48.613720    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:46:48.633022    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:46:48.633087    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:46:48.652312    4673 provision.go:87] duration metric: took 197.30092ms to configureAuth
	I0728 18:46:48.652326    4673 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:46:48.652490    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:48.652518    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:48.652647    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.652719    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.652809    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.652902    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.652987    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.653090    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.653211    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.653218    4673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:46:48.701718    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:46:48.701730    4673 buildroot.go:70] root file system type: tmpfs
	I0728 18:46:48.701803    4673 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:46:48.701814    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.701938    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.702016    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.702108    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.702184    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.702318    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.702459    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.702507    4673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:46:48.760488    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:46:48.760507    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:48.760654    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:48.760771    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.760872    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:48.760982    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:48.761116    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:48.761257    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:48.761270    4673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:46:50.332441    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:46:50.332463    4673 machine.go:97] duration metric: took 13.051821636s to provisionDockerMachine
	I0728 18:46:50.332471    4673 start.go:293] postStartSetup for "multinode-362000-m02" (driver="hyperkit")
	I0728 18:46:50.332495    4673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:46:50.332510    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.332723    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:46:50.332735    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:50.332845    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.332941    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.333040    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.333118    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:50.368917    4673 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:46:50.372602    4673 command_runner.go:130] > NAME=Buildroot
	I0728 18:46:50.372611    4673 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:46:50.372615    4673 command_runner.go:130] > ID=buildroot
	I0728 18:46:50.372619    4673 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:46:50.372623    4673 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:46:50.372712    4673 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:46:50.372720    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:46:50.372817    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:46:50.373004    4673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:46:50.373010    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:46:50.373216    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:46:50.385453    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:46:50.412967    4673 start.go:296] duration metric: took 80.473695ms for postStartSetup
	I0728 18:46:50.412990    4673 fix.go:56] duration metric: took 13.242218481s for fixHost
	I0728 18:46:50.413012    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:50.413158    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.413245    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.413340    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.413423    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.413545    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:46:50.413686    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0728 18:46:50.413694    4673 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:46:50.463985    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217610.598970634
	
	I0728 18:46:50.463996    4673 fix.go:216] guest clock: 1722217610.598970634
	I0728 18:46:50.464002    4673 fix.go:229] Guest: 2024-07-28 18:46:50.598970634 -0700 PDT Remote: 2024-07-28 18:46:50.412997 -0700 PDT m=+72.030483613 (delta=185.973634ms)
	I0728 18:46:50.464012    4673 fix.go:200] guest clock delta is within tolerance: 185.973634ms
	I0728 18:46:50.464016    4673 start.go:83] releasing machines lock for "multinode-362000-m02", held for 13.293267871s
	I0728 18:46:50.464033    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.464157    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:50.484636    4673 out.go:177] * Found network options:
	I0728 18:46:50.505437    4673 out.go:177]   - NO_PROXY=192.169.0.13
	W0728 18:46:50.527352    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:46:50.527391    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.528310    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.528592    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:50.528715    4673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:46:50.528756    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	W0728 18:46:50.528835    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:46:50.528942    4673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:46:50.528963    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:50.528960    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.529192    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:50.529230    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.529340    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.529373    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:50.529487    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:50.529516    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:50.529633    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:50.556694    4673 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:46:50.556800    4673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:46:50.556861    4673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:46:50.606414    4673 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:46:50.606446    4673 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:46:50.606457    4673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:46:50.606466    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:46:50.606561    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:46:50.621864    4673 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:46:50.622119    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:46:50.631126    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:46:50.640070    4673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:46:50.640119    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:46:50.648931    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:46:50.657813    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:46:50.666736    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:46:50.675962    4673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:46:50.685166    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:46:50.694029    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:46:50.702688    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:46:50.711634    4673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:46:50.719728    4673 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:46:50.719881    4673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:46:50.727966    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:50.824868    4673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:46:50.842133    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:46:50.842204    4673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:46:50.856570    4673 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:46:50.856706    4673 command_runner.go:130] > [Unit]
	I0728 18:46:50.856714    4673 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:46:50.856718    4673 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:46:50.856729    4673 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:46:50.856734    4673 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:46:50.856738    4673 command_runner.go:130] > StartLimitBurst=3
	I0728 18:46:50.856742    4673 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:46:50.856746    4673 command_runner.go:130] > [Service]
	I0728 18:46:50.856749    4673 command_runner.go:130] > Type=notify
	I0728 18:46:50.856756    4673 command_runner.go:130] > Restart=on-failure
	I0728 18:46:50.856760    4673 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0728 18:46:50.856767    4673 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:46:50.856773    4673 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:46:50.856779    4673 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:46:50.856785    4673 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:46:50.856791    4673 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:46:50.856797    4673 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:46:50.856802    4673 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:46:50.856812    4673 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:46:50.856819    4673 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:46:50.856824    4673 command_runner.go:130] > ExecStart=
	I0728 18:46:50.856838    4673 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:46:50.856843    4673 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:46:50.856853    4673 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:46:50.856860    4673 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:46:50.856863    4673 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:46:50.856866    4673 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:46:50.856870    4673 command_runner.go:130] > LimitCORE=infinity
	I0728 18:46:50.856875    4673 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:46:50.856879    4673 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:46:50.856882    4673 command_runner.go:130] > TasksMax=infinity
	I0728 18:46:50.856886    4673 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:46:50.856894    4673 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:46:50.856897    4673 command_runner.go:130] > Delegate=yes
	I0728 18:46:50.856902    4673 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:46:50.856910    4673 command_runner.go:130] > KillMode=process
	I0728 18:46:50.856915    4673 command_runner.go:130] > [Install]
	I0728 18:46:50.856918    4673 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:46:50.857020    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:46:50.871266    4673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:46:50.888814    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:46:50.899257    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:46:50.909517    4673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:46:50.928866    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:46:50.940308    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:46:50.954963    4673 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:46:50.955343    4673 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:46:50.958224    4673 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:46:50.958383    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:46:50.965826    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:46:50.979903    4673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:46:51.080819    4673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:46:51.185908    4673 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:46:51.185935    4673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:46:51.199686    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:51.301774    4673 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:46:53.591374    4673 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.289598801s)
	I0728 18:46:53.591423    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:46:53.602727    4673 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:46:53.616558    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:46:53.627458    4673 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:46:53.721063    4673 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:46:53.827566    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:53.938284    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:46:53.952100    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:46:53.963267    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:54.078472    4673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:46:54.137534    4673 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:46:54.137615    4673 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:46:54.141915    4673 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 18:46:54.141930    4673 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 18:46:54.141935    4673 command_runner.go:130] > Device: 0,22	Inode: 745         Links: 1
	I0728 18:46:54.141940    4673 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0728 18:46:54.141944    4673 command_runner.go:130] > Access: 2024-07-29 01:46:54.227951753 +0000
	I0728 18:46:54.141955    4673 command_runner.go:130] > Modify: 2024-07-29 01:46:54.227951753 +0000
	I0728 18:46:54.141959    4673 command_runner.go:130] > Change: 2024-07-29 01:46:54.228951679 +0000
	I0728 18:46:54.141966    4673 command_runner.go:130] >  Birth: -
	I0728 18:46:54.141993    4673 start.go:563] Will wait 60s for crictl version
	I0728 18:46:54.142047    4673 ssh_runner.go:195] Run: which crictl
	I0728 18:46:54.144853    4673 command_runner.go:130] > /usr/bin/crictl
	I0728 18:46:54.144959    4673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:46:54.171431    4673 command_runner.go:130] > Version:  0.1.0
	I0728 18:46:54.171446    4673 command_runner.go:130] > RuntimeName:  docker
	I0728 18:46:54.171450    4673 command_runner.go:130] > RuntimeVersion:  27.1.0
	I0728 18:46:54.171454    4673 command_runner.go:130] > RuntimeApiVersion:  v1
	I0728 18:46:54.172503    4673 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0728 18:46:54.172577    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:46:54.191400    4673 command_runner.go:130] > 27.1.0
	I0728 18:46:54.192567    4673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:46:54.209519    4673 command_runner.go:130] > 27.1.0
	I0728 18:46:54.231668    4673 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0728 18:46:54.273335    4673 out.go:177]   - env NO_PROXY=192.169.0.13
	I0728 18:46:54.294441    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:46:54.294833    4673 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0728 18:46:54.298880    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:46:54.308145    4673 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:46:54.308318    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:54.308552    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.308568    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.317259    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52898
	I0728 18:46:54.317604    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.317948    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.317964    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.318184    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.318294    4673 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:46:54.318377    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:54.318467    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:46:54.319549    4673 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:46:54.319799    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.319816    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.328302    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52900
	I0728 18:46:54.328634    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.328960    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.328972    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.329182    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.329302    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:46:54.329393    4673 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000 for IP: 192.169.0.14
	I0728 18:46:54.329399    4673 certs.go:194] generating shared ca certs ...
	I0728 18:46:54.329411    4673 certs.go:226] acquiring lock for ca certs: {Name:mk64aac07da96a39ae6165406ad142fbce2d0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:54.329592    4673 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key
	I0728 18:46:54.329666    4673 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key
	I0728 18:46:54.329677    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 18:46:54.329700    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 18:46:54.329720    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 18:46:54.329738    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 18:46:54.329829    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem (1338 bytes)
	W0728 18:46:54.329879    4673 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533_empty.pem, impossibly tiny 0 bytes
	I0728 18:46:54.329889    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:46:54.329927    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem (1078 bytes)
	I0728 18:46:54.329958    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:46:54.329986    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem (1679 bytes)
	I0728 18:46:54.330048    4673 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:46:54.330086    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem -> /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.330106    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.330129    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.330155    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:46:54.350393    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:46:54.370269    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:46:54.389580    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 18:46:54.408538    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/1533.pem --> /usr/share/ca-certificates/1533.pem (1338 bytes)
	I0728 18:46:54.427454    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /usr/share/ca-certificates/15332.pem (1708 bytes)
	I0728 18:46:54.446481    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:46:54.465487    4673 ssh_runner.go:195] Run: openssl version
	I0728 18:46:54.469435    4673 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0728 18:46:54.469642    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1533.pem && ln -fs /usr/share/ca-certificates/1533.pem /etc/ssl/certs/1533.pem"
	I0728 18:46:54.478604    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.481736    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.481923    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:57 /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.481961    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1533.pem
	I0728 18:46:54.485902    4673 command_runner.go:130] > 51391683
	I0728 18:46:54.486146    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1533.pem /etc/ssl/certs/51391683.0"
	I0728 18:46:54.495167    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15332.pem && ln -fs /usr/share/ca-certificates/15332.pem /etc/ssl/certs/15332.pem"
	I0728 18:46:54.504152    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.507315    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.507464    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:57 /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.507502    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15332.pem
	I0728 18:46:54.511525    4673 command_runner.go:130] > 3ec20f2e
	I0728 18:46:54.511693    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15332.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:46:54.520673    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:46:54.529638    4673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.532773    4673 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.532893    4673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.532924    4673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:46:54.536882    4673 command_runner.go:130] > b5213941
	I0728 18:46:54.537124    4673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:46:54.546070    4673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:46:54.548944    4673 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:46:54.549055    4673 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0728 18:46:54.549088    4673 kubeadm.go:934] updating node {m02 192.169.0.14 8443 v1.30.3 docker false true} ...
	I0728 18:46:54.549144    4673 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-362000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:46:54.549183    4673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0728 18:46:54.557127    4673 command_runner.go:130] > kubeadm
	I0728 18:46:54.557135    4673 command_runner.go:130] > kubectl
	I0728 18:46:54.557140    4673 command_runner.go:130] > kubelet
	I0728 18:46:54.557199    4673 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:46:54.557243    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0728 18:46:54.565192    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0728 18:46:54.578634    4673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:46:54.592042    4673 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0728 18:46:54.594943    4673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:46:54.604909    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:54.700553    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:46:54.715123    4673 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:46:54.715394    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.715413    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.724542    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52902
	I0728 18:46:54.724903    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.725281    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.725304    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.725544    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.725668    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:46:54.725766    4673 start.go:317] joinCluster: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:54.725874    4673 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:54.725896    4673 host.go:66] Checking if "multinode-362000-m02" exists ...
	I0728 18:46:54.726163    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.726182    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.735248    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52904
	I0728 18:46:54.735604    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.735957    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.735976    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.736202    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.736322    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:46:54.736409    4673 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:46:54.736584    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:54.736803    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.736822    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.745568    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52906
	I0728 18:46:54.745922    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.746253    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.746263    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.746483    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.746596    4673 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:46:54.746678    4673 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:46:54.746758    4673 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:46:54.747695    4673 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:46:54.747964    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:46:54.747981    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:46:54.756703    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52908
	I0728 18:46:54.757040    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:46:54.757355    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:46:54.757366    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:46:54.757565    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:46:54.757681    4673 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:46:54.757774    4673 api_server.go:166] Checking apiserver status ...
	I0728 18:46:54.757820    4673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:46:54.757831    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:46:54.757905    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:46:54.758008    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:46:54.758106    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:46:54.758198    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:46:54.806335    4673 command_runner.go:130] > 1742
	I0728 18:46:54.806444    4673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup
	W0728 18:46:54.815059    4673 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:46:54.815113    4673 ssh_runner.go:195] Run: ls
	I0728 18:46:54.818354    4673 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:46:54.821324    4673 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:46:54.821371    4673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-362000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0728 18:46:54.908684    4673 command_runner.go:130] > node/multinode-362000-m02 cordoned
	I0728 18:46:57.929209    4673 command_runner.go:130] > pod "busybox-fc5497c4f-svnlx" has DeletionTimestamp older than 1 seconds, skipping
	I0728 18:46:57.929285    4673 command_runner.go:130] > node/multinode-362000-m02 drained
	I0728 18:46:57.930945    4673 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-8hhwv, kube-system/kube-proxy-dzz6p
	I0728 18:46:57.931070    4673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-362000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.109696864s)
	I0728 18:46:57.931083    4673 node.go:128] successfully drained node "multinode-362000-m02"
	I0728 18:46:57.931113    4673 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0728 18:46:57.931135    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:46:57.931291    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:46:57.931385    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:46:57.931475    4673 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:46:57.931571    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:46:58.018108    4673 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:46:58.018262    4673 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0728 18:46:58.018301    4673 command_runner.go:130] > [reset] Stopping the kubelet service
	I0728 18:46:58.024499    4673 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0728 18:46:58.235360    4673 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0728 18:46:58.236942    4673 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0728 18:46:58.237014    4673 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0728 18:46:58.237025    4673 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0728 18:46:58.237031    4673 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0728 18:46:58.237036    4673 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0728 18:46:58.237041    4673 command_runner.go:130] > to reset your system's IPVS tables.
	I0728 18:46:58.237053    4673 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0728 18:46:58.237070    4673 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0728 18:46:58.237833    4673 command_runner.go:130] ! W0729 01:46:58.158346    1317 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0728 18:46:58.237859    4673 command_runner.go:130] ! W0729 01:46:58.375481    1317 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod ccdd12c4acff53ab3d996d68ff20e1434ae4b03bba8407120e64a2b4a503be78: output: E0729 01:46:58.285481    1347 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-svnlx_default\" network: cni config uninitialized" podSandboxID="ccdd12c4acff53ab3d996d68ff20e1434ae4b03bba8407120e64a2b4a503be78"
	I0728 18:46:58.237872    4673 command_runner.go:130] ! time="2024-07-29T01:46:58Z" level=fatal msg="stopping the pod sandbox \"ccdd12c4acff53ab3d996d68ff20e1434ae4b03bba8407120e64a2b4a503be78\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-svnlx_default\" network: cni config uninitialized"
	I0728 18:46:58.237876    4673 command_runner.go:130] ! : exit status 1
	I0728 18:46:58.237886    4673 node.go:155] successfully reset node "multinode-362000-m02"
	I0728 18:46:58.238162    4673 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:46:58.238385    4673 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10bd5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:46:58.238654    4673 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0728 18:46:58.238686    4673 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:58.238690    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:58.238695    4673 round_trippers.go:473]     Content-Type: application/json
	I0728 18:46:58.238699    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:58.238702    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:58.241342    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:46:58.241352    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:58.241357    4673 round_trippers.go:580]     Audit-Id: c133346d-1d9d-41d5-9bb8-01a0c040940d
	I0728 18:46:58.241367    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:58.241370    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:58.241373    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:58.241377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:58.241381    4673 round_trippers.go:580]     Content-Length: 171
	I0728 18:46:58.241389    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:58 GMT
	I0728 18:46:58.241400    4673 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-362000-m02","kind":"nodes","uid":"1470d510-7ea6-41d4-bc22-26a39ad95c90"}}
	I0728 18:46:58.241417    4673 node.go:180] successfully deleted node "multinode-362000-m02"
	I0728 18:46:58.241424    4673 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:58.241442    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0728 18:46:58.241456    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:46:58.241610    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:46:58.241710    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:46:58.241832    4673 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:46:58.241924    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:46:58.340820    4673 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dhteq6.jo67xl499g7wortn --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 
	I0728 18:46:58.340863    4673 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:58.340881    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dhteq6.jo67xl499g7wortn --discovery-token-ca-cert-hash sha256:ec7c74e396412b72eca1a30067f2206102f21263ed392ac701ce09074de572b3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-362000-m02"
	I0728 18:46:58.456507    4673 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:46:59.123671    4673 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 18:46:59.123686    4673 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 18:46:59.123694    4673 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 18:46:59.123712    4673 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:46:59.123723    4673 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:46:59.123728    4673 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 18:46:59.123736    4673 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0728 18:46:59.123741    4673 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.380082ms
	I0728 18:46:59.123745    4673 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0728 18:46:59.123749    4673 command_runner.go:130] > This node has joined the cluster:
	I0728 18:46:59.123755    4673 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0728 18:46:59.123760    4673 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0728 18:46:59.123765    4673 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0728 18:46:59.123788    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0728 18:46:59.332343    4673 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0728 18:46:59.332487    4673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-362000-m02 minikube.k8s.io/updated_at=2024_07_28T18_46_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=multinode-362000 minikube.k8s.io/primary=false
	I0728 18:46:59.404418    4673 command_runner.go:130] > node/multinode-362000-m02 labeled
	I0728 18:46:59.405437    4673 start.go:319] duration metric: took 4.679707009s to joinCluster
	I0728 18:46:59.405479    4673 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 18:46:59.405665    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:59.427713    4673 out.go:177] * Verifying Kubernetes components...
	I0728 18:46:59.469772    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:46:59.570321    4673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:46:59.581522    4673 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:46:59.581718    4673 kapi.go:59] client config for multinode-362000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10bd5b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:46:59.581899    4673 node_ready.go:35] waiting up to 6m0s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:46:59.581939    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:46:59.581944    4673 round_trippers.go:469] Request Headers:
	I0728 18:46:59.581949    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:46:59.581953    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:46:59.583579    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:46:59.583588    4673 round_trippers.go:577] Response Headers:
	I0728 18:46:59.583593    4673 round_trippers.go:580]     Audit-Id: 0be9369c-d23d-44aa-aa15-d62e88617b5a
	I0728 18:46:59.583596    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:46:59.583600    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:46:59.583607    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:46:59.583610    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:46:59.583612    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:46:59 GMT
	I0728 18:46:59.583687    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:00.082619    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:00.082645    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:00.082662    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:00.082747    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:00.085440    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:00.085454    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:00.085460    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:00.085481    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:00.085494    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:00.085499    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:00.085510    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:00 GMT
	I0728 18:47:00.085516    4673 round_trippers.go:580]     Audit-Id: a92f9e39-ec6f-499d-b288-17ddfa0dce67
	I0728 18:47:00.085597    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:00.583529    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:00.583544    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:00.583597    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:00.583602    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:00.585299    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:00.585308    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:00.585312    4673 round_trippers.go:580]     Audit-Id: 6ce7c3f0-0595-484c-af8d-fe3c0974c93e
	I0728 18:47:00.585316    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:00.585319    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:00.585321    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:00.585324    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:00.585328    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:00 GMT
	I0728 18:47:00.585498    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:01.083299    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:01.083332    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:01.083414    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:01.083424    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:01.086308    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:01.086328    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:01.086338    4673 round_trippers.go:580]     Audit-Id: d1797d5e-4346-4fea-93c6-ea3b534ad6f7
	I0728 18:47:01.086345    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:01.086352    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:01.086358    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:01.086363    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:01.086369    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:01 GMT
	I0728 18:47:01.086463    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:01.583575    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:01.583593    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:01.583599    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:01.583601    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:01.585325    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:01.585336    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:01.585342    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:01.585345    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:01.585348    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:01 GMT
	I0728 18:47:01.585351    4673 round_trippers.go:580]     Audit-Id: a1206081-4a79-4797-be1b-91493a445154
	I0728 18:47:01.585353    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:01.585355    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:01.585466    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:01.585645    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:02.083477    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:02.083502    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:02.083592    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:02.083601    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:02.086148    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:02.086163    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:02.086173    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:02.086181    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:02.086188    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:02 GMT
	I0728 18:47:02.086194    4673 round_trippers.go:580]     Audit-Id: 8c627219-2e40-475e-b83f-266af5621abd
	I0728 18:47:02.086201    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:02.086205    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:02.086661    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:02.583535    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:02.583554    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:02.583562    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:02.583567    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:02.585924    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:02.585934    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:02.585940    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:02.585944    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:02 GMT
	I0728 18:47:02.585947    4673 round_trippers.go:580]     Audit-Id: 977c9a16-d746-4d94-8632-7a74cefa5500
	I0728 18:47:02.585949    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:02.585952    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:02.585954    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:02.586085    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:03.082097    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:03.082124    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:03.082135    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:03.082141    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:03.084163    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:03.084176    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:03.084183    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:03.084188    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:03.084194    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:03 GMT
	I0728 18:47:03.084199    4673 round_trippers.go:580]     Audit-Id: 678c63cb-f72b-49c1-8395-41fd933e6d38
	I0728 18:47:03.084205    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:03.084208    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:03.084280    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:03.583641    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:03.583683    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:03.583769    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:03.583779    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:03.586515    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:03.586531    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:03.586543    4673 round_trippers.go:580]     Audit-Id: 8c9f1775-db72-4369-856a-00cef1bc50ba
	I0728 18:47:03.586546    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:03.586551    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:03.586555    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:03.586559    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:03.586564    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:03 GMT
	I0728 18:47:03.586633    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:03.586842    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:04.083562    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:04.083586    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:04.083657    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:04.083667    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:04.086339    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:04.086352    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:04.086370    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:04.086375    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:04.086378    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:04 GMT
	I0728 18:47:04.086382    4673 round_trippers.go:580]     Audit-Id: 5937a6af-6f15-4fea-8e5c-a4d3de23ff73
	I0728 18:47:04.086385    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:04.086388    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:04.086760    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:04.583503    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:04.583526    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:04.583536    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:04.583543    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:04.586213    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:04.586226    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:04.586234    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:04.586242    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:04.586247    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:04.586252    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:04.586258    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:04 GMT
	I0728 18:47:04.586265    4673 round_trippers.go:580]     Audit-Id: 5e772194-7b0c-4238-ad13-1965058f1e80
	I0728 18:47:04.586509    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:05.083929    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:05.083955    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:05.084062    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:05.084075    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:05.086388    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:05.086399    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:05.086406    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:05 GMT
	I0728 18:47:05.086410    4673 round_trippers.go:580]     Audit-Id: 48ccfbdf-f1af-4a34-9739-ca888d40d18d
	I0728 18:47:05.086414    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:05.086418    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:05.086423    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:05.086427    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:05.086672    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:05.583613    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:05.583640    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:05.583681    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:05.583694    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:05.586237    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:05.586250    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:05.586257    4673 round_trippers.go:580]     Audit-Id: e7b11eac-026b-49e8-af17-fd8c3bed843a
	I0728 18:47:05.586261    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:05.586266    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:05.586271    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:05.586278    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:05.586285    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:05 GMT
	I0728 18:47:05.586500    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:06.082733    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:06.082760    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:06.082769    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:06.082772    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:06.084557    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:06.084565    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:06.084570    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:06 GMT
	I0728 18:47:06.084573    4673 round_trippers.go:580]     Audit-Id: fc198dac-3f3d-4556-91d3-121b753a1ba0
	I0728 18:47:06.084576    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:06.084580    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:06.084584    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:06.084587    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:06.084754    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:06.084918    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:06.583526    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:06.583559    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:06.583573    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:06.583579    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:06.586174    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:06.586195    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:06.586203    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:06.586208    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:06 GMT
	I0728 18:47:06.586213    4673 round_trippers.go:580]     Audit-Id: 19b104e3-3cb4-493d-9ca1-79028198dcff
	I0728 18:47:06.586217    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:06.586230    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:06.586235    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:06.586313    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:07.082627    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:07.082653    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:07.082664    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:07.082671    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:07.085339    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:07.085354    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:07.085360    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:07.085364    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:07.085368    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:07 GMT
	I0728 18:47:07.085376    4673 round_trippers.go:580]     Audit-Id: a236a5a6-2ec6-4ada-a8c6-15c1e07ab613
	I0728 18:47:07.085380    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:07.085386    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:07.085456    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:07.583561    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:07.583588    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:07.583600    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:07.583606    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:07.586272    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:07.586292    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:07.586299    4673 round_trippers.go:580]     Audit-Id: 3ea443e1-c9f4-4c9c-ac6b-d6bcc8ce04cd
	I0728 18:47:07.586304    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:07.586310    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:07.586314    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:07.586318    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:07.586321    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:07 GMT
	I0728 18:47:07.586385    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:08.082842    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:08.082867    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:08.082877    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:08.082882    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:08.085218    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:08.085230    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:08.085237    4673 round_trippers.go:580]     Audit-Id: b52fb803-933c-4628-affa-c6866ccbd1da
	I0728 18:47:08.085251    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:08.085258    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:08.085264    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:08.085269    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:08.085276    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:08 GMT
	I0728 18:47:08.085461    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:08.085674    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:08.583592    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:08.583610    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:08.583617    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:08.583622    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:08.585747    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:08.585757    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:08.585770    4673 round_trippers.go:580]     Audit-Id: 05ed2d15-b1ed-43c1-a795-249105341cb1
	I0728 18:47:08.585775    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:08.585781    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:08.585784    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:08.585787    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:08.585790    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:08 GMT
	I0728 18:47:08.586064    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:09.082069    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:09.082095    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:09.082107    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:09.082122    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:09.084654    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:09.084667    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:09.084674    4673 round_trippers.go:580]     Audit-Id: 38a2e509-bfae-440c-a13d-9b0670664c44
	I0728 18:47:09.084682    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:09.084687    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:09.084693    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:09.084700    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:09.084706    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:09 GMT
	I0728 18:47:09.084772    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1072","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3673 chars]
	I0728 18:47:09.583552    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:09.583581    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:09.583594    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:09.583681    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:09.586191    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:09.586206    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:09.586213    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:09.586217    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:09 GMT
	I0728 18:47:09.586222    4673 round_trippers.go:580]     Audit-Id: 244e93c2-0ae9-43df-a5c8-07133b904a24
	I0728 18:47:09.586255    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:09.586264    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:09.586274    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:09.586367    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:10.083269    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:10.083370    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:10.083384    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:10.083391    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:10.085996    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:10.086015    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:10.086023    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:10.086028    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:10.086032    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:10 GMT
	I0728 18:47:10.086063    4673 round_trippers.go:580]     Audit-Id: d68a320d-bf05-4f48-a789-117a8e33b47b
	I0728 18:47:10.086074    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:10.086081    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:10.086229    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:10.086450    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:10.583491    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:10.583518    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:10.583530    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:10.583536    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:10.586279    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:10.586292    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:10.586299    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:10.586304    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:10.586308    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:10.586311    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:10 GMT
	I0728 18:47:10.586316    4673 round_trippers.go:580]     Audit-Id: 1e516778-0609-427d-9b2d-94936c11d2b3
	I0728 18:47:10.586320    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:10.586718    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:11.083441    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:11.083460    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:11.083468    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:11.083471    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:11.085507    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:11.085521    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:11.085527    4673 round_trippers.go:580]     Audit-Id: bfbe1c79-2000-473a-8d45-9dd4cfa52187
	I0728 18:47:11.085535    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:11.085538    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:11.085540    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:11.085543    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:11.085546    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:11 GMT
	I0728 18:47:11.085653    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:11.583494    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:11.583522    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:11.583533    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:11.583539    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:11.586432    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:11.586447    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:11.586455    4673 round_trippers.go:580]     Audit-Id: ea4443b5-8768-4649-90e3-04c255fdd021
	I0728 18:47:11.586458    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:11.586462    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:11.586465    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:11.586470    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:11.586473    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:11 GMT
	I0728 18:47:11.586608    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:12.083737    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:12.083757    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:12.083798    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:12.083805    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:12.085657    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:12.085666    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:12.085683    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:12.085696    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:12 GMT
	I0728 18:47:12.085700    4673 round_trippers.go:580]     Audit-Id: 24ca3df1-827e-402f-9e1d-7153d754fe03
	I0728 18:47:12.085704    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:12.085707    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:12.085725    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:12.085836    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:12.583551    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:12.583585    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:12.583596    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:12.583602    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:12.586276    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:12.586292    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:12.586300    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:12.586306    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:12 GMT
	I0728 18:47:12.586309    4673 round_trippers.go:580]     Audit-Id: ac02a110-f61d-43a7-a2d9-2e8deb40894a
	I0728 18:47:12.586313    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:12.586317    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:12.586320    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:12.586393    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:12.586620    4673 node_ready.go:53] node "multinode-362000-m02" has status "Ready":"False"
	I0728 18:47:13.083427    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:13.083462    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:13.083473    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:13.083481    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:13.086185    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:13.086200    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:13.086208    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:13.086214    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:13.086223    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:13 GMT
	I0728 18:47:13.086236    4673 round_trippers.go:580]     Audit-Id: 1ecb1c00-20b1-4739-b671-ef5f0f726f67
	I0728 18:47:13.086246    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:13.086259    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:13.086342    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:13.583552    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:13.583576    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:13.583588    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:13.583596    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:13.586285    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:13.586298    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:13.586305    4673 round_trippers.go:580]     Audit-Id: 63540feb-66f6-472f-ae28-ec3f5b163290
	I0728 18:47:13.586310    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:13.586313    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:13.586317    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:13.586321    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:13.586325    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:13 GMT
	I0728 18:47:13.586595    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:14.082829    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:14.082854    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.082866    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.082876    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.085357    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.085370    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.085377    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.085382    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.085387    4673 round_trippers.go:580]     Audit-Id: 07158bf9-c1df-41b3-875c-270749eaf52a
	I0728 18:47:14.085402    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.085409    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.085415    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.085729    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1103","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 4065 chars]
	I0728 18:47:14.583527    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:14.583550    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.583561    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.583567    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.586127    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.586138    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.586146    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.586151    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.586161    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.586165    4673 round_trippers.go:580]     Audit-Id: a1dd98f0-ae96-4083-82f9-ae54c771a321
	I0728 18:47:14.586169    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.586172    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.586777    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1113","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0728 18:47:14.587002    4673 node_ready.go:49] node "multinode-362000-m02" has status "Ready":"True"
	I0728 18:47:14.587013    4673 node_ready.go:38] duration metric: took 15.005213007s for node "multinode-362000-m02" to be "Ready" ...
	I0728 18:47:14.587020    4673 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:47:14.587063    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0728 18:47:14.587071    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.587078    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.587087    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.589399    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.589411    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.589417    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.589420    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.589437    4673 round_trippers.go:580]     Audit-Id: 27366e48-0fac-407c-8309-4d2b8e5d873e
	I0728 18:47:14.589444    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.589447    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.589450    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.590265    4673 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1115"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86442 chars]
	I0728 18:47:14.592159    4673 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.592195    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8npcw
	I0728 18:47:14.592200    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.592206    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.592210    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.593323    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.593331    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.593336    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.593341    4673 round_trippers.go:580]     Audit-Id: 40e8a029-e8e7-442f-a012-29763697b332
	I0728 18:47:14.593348    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.593352    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.593354    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.593359    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.593532    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8npcw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a0fcbb6f-1182-4d9e-bc04-456f1b4de1db","resourceVersion":"1001","creationTimestamp":"2024-07-29T01:40:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"879c0639-20bf-4a87-a0f1-438b766557d6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879c0639-20bf-4a87-a0f1-438b766557d6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0728 18:47:14.593765    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.593771    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.593777    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.593780    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.594753    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.594762    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.594769    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.594774    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.594778    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.594782    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.594805    4673 round_trippers.go:580]     Audit-Id: 54313934-94d7-4b70-b561-5005190065d9
	I0728 18:47:14.594815    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.594983    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.595169    4673 pod_ready.go:92] pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.595179    4673 pod_ready.go:81] duration metric: took 3.009773ms for pod "coredns-7db6d8ff4d-8npcw" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.595185    4673 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.595220    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-362000
	I0728 18:47:14.595225    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.595230    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.595235    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.596229    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.596236    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.596243    4673 round_trippers.go:580]     Audit-Id: 8bd93089-422c-4ac2-881d-32fff2f3827d
	I0728 18:47:14.596249    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.596253    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.596258    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.596262    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.596266    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.596373    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-362000","namespace":"kube-system","uid":"7b75e781-36f1-4f6f-99a4-808974571bcd","resourceVersion":"971","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.mirror":"652ae4c52430ecf70f417085f8ca8007","kubernetes.io/config.seen":"2024-07-29T01:39:56.230156002Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0728 18:47:14.596577    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.596583    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.596589    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.596591    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.597598    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.597606    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.597611    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.597620    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.597623    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.597626    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.597628    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.597632    4673 round_trippers.go:580]     Audit-Id: 3f664d18-6652-4750-97a6-c67ed0e633ee
	I0728 18:47:14.597727    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.597887    4673 pod_ready.go:92] pod "etcd-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.597896    4673 pod_ready.go:81] duration metric: took 2.707171ms for pod "etcd-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.597906    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.597934    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-362000
	I0728 18:47:14.597941    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.597947    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.597952    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.598976    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.598984    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.598989    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.598992    4673 round_trippers.go:580]     Audit-Id: fbbbe501-e880-49f1-8f56-53581a7896c6
	I0728 18:47:14.598996    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.599000    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.599006    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.599009    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.599114    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-362000","namespace":"kube-system","uid":"95b0fc9b-aad1-47ad-ae00-439b4e4b905a","resourceVersion":"961","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.mirror":"79a18d82eaa15eb8ff11e00b763169d7","kubernetes.io/config.seen":"2024-07-29T01:39:56.230158669Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0728 18:47:14.599381    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.599388    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.599394    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.599397    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.600352    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.600359    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.600363    4673 round_trippers.go:580]     Audit-Id: 6cf14d7d-8f78-4b11-ad0c-ed366d6ea160
	I0728 18:47:14.600366    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.600369    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.600373    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.600379    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.600382    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.600485    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.600659    4673 pod_ready.go:92] pod "kube-apiserver-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.600667    4673 pod_ready.go:81] duration metric: took 2.755614ms for pod "kube-apiserver-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.600673    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.600709    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-362000
	I0728 18:47:14.600714    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.600719    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.600721    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.601710    4673 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0728 18:47:14.601717    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.601722    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.601725    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.601732    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.601737    4673 round_trippers.go:580]     Audit-Id: f3a3e678-dec2-4d3e-9d31-58710e541dbb
	I0728 18:47:14.601740    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.601742    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.601897    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-362000","namespace":"kube-system","uid":"5a6ca54d-e3db-4e1f-a7e0-ceb52dfecdb9","resourceVersion":"969","creationTimestamp":"2024-07-29T01:39:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.mirror":"022d1af18783ba93c73769e777010f0c","kubernetes.io/config.seen":"2024-07-29T01:39:56.230159555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0728 18:47:14.602126    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:14.602133    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.602139    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.602143    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.603211    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:14.603217    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.603221    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.603225    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.603227    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.603229    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.603231    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.603233    4673 round_trippers.go:580]     Audit-Id: 8ae08f39-b644-4897-b6aa-938523cee4a0
	I0728 18:47:14.603401    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:14.603568    4673 pod_ready.go:92] pod "kube-controller-manager-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:14.603576    4673 pod_ready.go:81] duration metric: took 2.898089ms for pod "kube-controller-manager-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.603581    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:14.783734    4673 request.go:629] Waited for 180.090282ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:47:14.783891    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gm24
	I0728 18:47:14.783901    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.783912    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.783922    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.786224    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.786238    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.786245    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.786251    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.786259    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.786265    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:14 GMT
	I0728 18:47:14.786271    4673 round_trippers.go:580]     Audit-Id: 03b73c1d-0925-421d-be62-4e2d5cededf8
	I0728 18:47:14.786275    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.786383    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gm24","generateName":"kube-proxy-","namespace":"kube-system","uid":"9db42267-b01f-40a3-bf21-c4d8cf6fb372","resourceVersion":"1030","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0728 18:47:14.985033    4673 request.go:629] Waited for 198.214721ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:47:14.985099    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m03
	I0728 18:47:14.985110    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:14.985123    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:14.985129    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:14.987719    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:14.987734    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:14.987741    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:14.987745    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:14.987750    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:14.987754    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:14.987757    4673 round_trippers.go:580]     Audit-Id: b70ff3a0-6e2b-45a6-9db5-c40b69093c47
	I0728 18:47:14.987760    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:14.987828    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m03","uid":"f2047331-d0da-470e-8da5-7b725a7d5c49","resourceVersion":"1102","creationTimestamp":"2024-07-29T01:44:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_44_56_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3899 chars]
	I0728 18:47:14.988052    4673 pod_ready.go:97] node "multinode-362000-m03" hosting pod "kube-proxy-7gm24" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000-m03" has status "Ready":"Unknown"
	I0728 18:47:14.988066    4673 pod_ready.go:81] duration metric: took 384.481958ms for pod "kube-proxy-7gm24" in "kube-system" namespace to be "Ready" ...
	E0728 18:47:14.988078    4673 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-362000-m03" hosting pod "kube-proxy-7gm24" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-362000-m03" has status "Ready":"Unknown"
	I0728 18:47:14.988084    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.183547    4673 request.go:629] Waited for 195.401729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:47:15.183691    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzz6p
	I0728 18:47:15.183702    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.183713    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.183720    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.186493    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:15.186508    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.186516    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.186520    4673 round_trippers.go:580]     Audit-Id: 4be157c9-28d3-49c3-be32-55e7eb564fe5
	I0728 18:47:15.186523    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.186527    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.186533    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.186536    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.186618    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dzz6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"577d6ba2-e17a-426f-8315-1688766fa435","resourceVersion":"1089","creationTimestamp":"2024-07-29T01:40:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0728 18:47:15.383495    4673 request.go:629] Waited for 196.542391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:15.383593    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000-m02
	I0728 18:47:15.383598    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.383604    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.383609    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.385518    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:15.385529    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.385537    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.385543    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.385547    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.385551    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.385554    4673 round_trippers.go:580]     Audit-Id: 2b4b94f0-b194-4608-b7a9-f754c84b1ca7
	I0728 18:47:15.385557    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.385632    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000-m02","uid":"4a4154e8-b960-4ea1-99e3-c2d322f4b764","resourceVersion":"1113","creationTimestamp":"2024-07-29T01:46:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_28T18_46_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:46:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0728 18:47:15.385808    4673 pod_ready.go:92] pod "kube-proxy-dzz6p" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:15.385816    4673 pod_ready.go:81] duration metric: took 397.729489ms for pod "kube-proxy-dzz6p" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.385842    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.584019    4673 request.go:629] Waited for 198.118502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:47:15.584187    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tz5h5
	I0728 18:47:15.584198    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.584209    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.584217    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.587046    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:15.587066    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.587077    4673 round_trippers.go:580]     Audit-Id: 75623fe3-4ec1-4c1e-aed7-e359acc02add
	I0728 18:47:15.587084    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.587091    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.587099    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.587105    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.587110    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.587276    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tz5h5","generateName":"kube-proxy-","namespace":"kube-system","uid":"f791f783-464c-485b-9eda-97a5f857cca4","resourceVersion":"974","creationTimestamp":"2024-07-29T01:40:09Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c4280f33-d710-483a-8730-b80781f1fcef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4280f33-d710-483a-8730-b80781f1fcef\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0728 18:47:15.784628    4673 request.go:629] Waited for 196.977316ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:15.784787    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:15.784798    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.784814    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.784823    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.787228    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:15.787240    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.787247    4673 round_trippers.go:580]     Audit-Id: f881ee1b-7ab6-4aca-9d61-24a9a01a3e6b
	I0728 18:47:15.787251    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.787258    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.787262    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.787266    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.787268    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:15 GMT
	I0728 18:47:15.787418    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:15.787659    4673 pod_ready.go:92] pod "kube-proxy-tz5h5" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:15.787677    4673 pod_ready.go:81] duration metric: took 401.821881ms for pod "kube-proxy-tz5h5" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.787691    4673 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:15.983559    4673 request.go:629] Waited for 195.822935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:47:15.983705    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-362000
	I0728 18:47:15.983717    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:15.983728    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:15.983735    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:15.987299    4673 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 18:47:15.987313    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:15.987321    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:16 GMT
	I0728 18:47:15.987326    4673 round_trippers.go:580]     Audit-Id: 423ac02c-f784-439d-afd9-1211747620f0
	I0728 18:47:15.987330    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:15.987334    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:15.987338    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:15.987341    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:15.987461    4673 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-362000","namespace":"kube-system","uid":"0299d0c0-d45d-45ee-9b8e-b5900e92694b","resourceVersion":"970","creationTimestamp":"2024-07-29T01:39:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.mirror":"fd4f6a755599b49b9ab3b0e30ce28d43","kubernetes.io/config.seen":"2024-07-29T01:39:50.867492603Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-29T01:39:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0728 18:47:16.184561    4673 request.go:629] Waited for 196.795647ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:16.184632    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-362000
	I0728 18:47:16.184641    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:16.184649    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:16.184653    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:16.186556    4673 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 18:47:16.186566    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:16.186572    4673 round_trippers.go:580]     Audit-Id: 06ab980b-4618-4bf8-8e74-003110516b4c
	I0728 18:47:16.186580    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:16.186584    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:16.186586    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:16.186588    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:16.186591    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:16 GMT
	I0728 18:47:16.186745    4673 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-29T01:39:54Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0728 18:47:16.186963    4673 pod_ready.go:92] pod "kube-scheduler-multinode-362000" in "kube-system" namespace has status "Ready":"True"
	I0728 18:47:16.186976    4673 pod_ready.go:81] duration metric: took 399.277206ms for pod "kube-scheduler-multinode-362000" in "kube-system" namespace to be "Ready" ...
	I0728 18:47:16.186983    4673 pod_ready.go:38] duration metric: took 1.599967105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 18:47:16.186996    4673 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 18:47:16.187051    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:47:16.198623    4673 system_svc.go:56] duration metric: took 11.624441ms WaitForService to wait for kubelet
	I0728 18:47:16.198637    4673 kubeadm.go:582] duration metric: took 16.793260637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:47:16.198652    4673 node_conditions.go:102] verifying NodePressure condition ...
	I0728 18:47:16.383957    4673 request.go:629] Waited for 185.226784ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0728 18:47:16.384059    4673 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0728 18:47:16.384070    4673 round_trippers.go:469] Request Headers:
	I0728 18:47:16.384082    4673 round_trippers.go:473]     Accept: application/json, */*
	I0728 18:47:16.384102    4673 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 18:47:16.386872    4673 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 18:47:16.386887    4673 round_trippers.go:577] Response Headers:
	I0728 18:47:16.386893    4673 round_trippers.go:580]     Audit-Id: 907b1c76-0ee3-463e-bfb3-31e9378c32f1
	I0728 18:47:16.386898    4673 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 18:47:16.386902    4673 round_trippers.go:580]     Content-Type: application/json
	I0728 18:47:16.386907    4673 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 464b0281-71b7-4d7b-8e05-c3bc9d1d1bc5
	I0728 18:47:16.386910    4673 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f451b95d-3c56-4d20-a668-4727d84492bf
	I0728 18:47:16.386913    4673 round_trippers.go:580]     Date: Mon, 29 Jul 2024 01:47:16 GMT
	I0728 18:47:16.387073    4673 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1116"},"items":[{"metadata":{"name":"multinode-362000","uid":"31d2da43-85a5-44a9-b5a3-c2989cd4e93a","resourceVersion":"981","creationTimestamp":"2024-07-29T01:39:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-362000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"608d90af2517e2ec0044e62b20376f40276621a1","minikube.k8s.io/name":"multinode-362000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_28T18_39_57_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 15041 chars]
	I0728 18:47:16.387619    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:47:16.387630    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:47:16.387638    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:47:16.387643    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:47:16.387647    4673 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0728 18:47:16.387651    4673 node_conditions.go:123] node cpu capacity is 2
	I0728 18:47:16.387655    4673 node_conditions.go:105] duration metric: took 188.998592ms to run NodePressure ...
	I0728 18:47:16.387664    4673 start.go:241] waiting for startup goroutines ...
	I0728 18:47:16.387687    4673 start.go:255] writing updated cluster config ...
	I0728 18:47:16.409997    4673 out.go:177] 
	I0728 18:47:16.432381    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:47:16.432513    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:47:16.455885    4673 out.go:177] * Starting "multinode-362000-m03" worker node in "multinode-362000" cluster
	I0728 18:47:16.497765    4673 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:47:16.497798    4673 cache.go:56] Caching tarball of preloaded images
	I0728 18:47:16.497969    4673 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 18:47:16.497987    4673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:47:16.498110    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:47:16.499231    4673 start.go:360] acquireMachinesLock for multinode-362000-m03: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:47:16.499326    4673 start.go:364] duration metric: took 72.314µs to acquireMachinesLock for "multinode-362000-m03"
	I0728 18:47:16.499361    4673 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:47:16.499368    4673 fix.go:54] fixHost starting: m03
	I0728 18:47:16.499775    4673 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:47:16.499793    4673 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:47:16.508921    4673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52914
	I0728 18:47:16.509260    4673 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:47:16.509603    4673 main.go:141] libmachine: Using API Version  1
	I0728 18:47:16.509619    4673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:47:16.509824    4673 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:47:16.509940    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:16.510032    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetState
	I0728 18:47:16.510105    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:47:16.510192    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4633
	I0728 18:47:16.511099    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid 4633 missing from process table
	I0728 18:47:16.511123    4673 fix.go:112] recreateIfNeeded on multinode-362000-m03: state=Stopped err=<nil>
	I0728 18:47:16.511131    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	W0728 18:47:16.511218    4673 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:47:16.532741    4673 out.go:177] * Restarting existing hyperkit VM for "multinode-362000-m03" ...
	I0728 18:47:16.574660    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .Start
	I0728 18:47:16.574958    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:47:16.574986    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid
	I0728 18:47:16.575072    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Using UUID 5cda4f36-38f7-4c06-808b-dbe144e26e44
	I0728 18:47:16.603696    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Generated MAC 3e:8b:c4:58:a6:30
	I0728 18:47:16.603718    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000
	I0728 18:47:16.603879    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5cda4f36-38f7-4c06-808b-dbe144e26e44", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ab590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:47:16.603919    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5cda4f36-38f7-4c06-808b-dbe144e26e44", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ab590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0728 18:47:16.603977    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5cda4f36-38f7-4c06-808b-dbe144e26e44", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage,/Users/j
enkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"}
	I0728 18:47:16.604012    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5cda4f36-38f7-4c06-808b-dbe144e26e44 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/multinode-362000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/mult
inode-362000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-362000"
	I0728 18:47:16.604024    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:47:16.605431    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 DEBUG: hyperkit: Pid is 4703
	I0728 18:47:16.605787    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Attempt 0
	I0728 18:47:16.605799    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:47:16.605867    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4703
	I0728 18:47:16.606925    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Searching for 3e:8b:c4:58:a6:30 in /var/db/dhcpd_leases ...
	I0728 18:47:16.606996    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0728 18:47:16.607012    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:6:55:c7:17:95:12 ID:1,6:55:c7:17:95:12 Lease:0x66a84606}
	I0728 18:47:16.607042    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e:8c:86:9:55:cf ID:1,e:8c:86:9:55:cf Lease:0x66a845cb}
	I0728 18:47:16.607066    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:3e:8b:c4:58:a6:30 ID:1,3e:8b:c4:58:a6:30 Lease:0x66a6f430}
	I0728 18:47:16.607077    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | Found match: 3e:8b:c4:58:a6:30
	I0728 18:47:16.607087    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | IP: 192.169.0.15
	I0728 18:47:16.607106    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetConfigRaw
	I0728 18:47:16.607808    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:47:16.607986    4673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/multinode-362000/config.json ...
	I0728 18:47:16.608522    4673 machine.go:94] provisionDockerMachine start ...
	I0728 18:47:16.608533    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:16.608656    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:16.608781    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:16.608912    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:16.609013    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:16.609122    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:16.609255    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:16.609415    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:16.609422    4673 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:47:16.613511    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:47:16.622762    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:47:16.623774    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:47:16.623792    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:47:16.623801    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:47:16.623812    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:47:17.008257    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:47:17.008273    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:47:17.123001    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:47:17.123021    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:47:17.123037    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:47:17.123048    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:47:17.123860    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:47:17.123871    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:47:22.750018    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:47:22.750121    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:47:22.750132    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:47:22.774448    4673 main.go:141] libmachine: (multinode-362000-m03) DBG | 2024/07/28 18:47:22 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:47:51.682863    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:47:51.682877    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:47:51.683017    4673 buildroot.go:166] provisioning hostname "multinode-362000-m03"
	I0728 18:47:51.683029    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:47:51.683125    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.683238    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:51.683325    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.683424    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.683514    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:51.683649    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:51.683794    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:51.683802    4673 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-362000-m03 && echo "multinode-362000-m03" | sudo tee /etc/hostname
	I0728 18:47:51.758129    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-362000-m03
	
	I0728 18:47:51.758143    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.758275    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:51.758370    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.758462    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.758558    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:51.758703    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:51.758849    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:51.758861    4673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-362000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-362000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-362000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:47:51.831272    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:47:51.831287    4673 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:47:51.831295    4673 buildroot.go:174] setting up certificates
	I0728 18:47:51.831313    4673 provision.go:84] configureAuth start
	I0728 18:47:51.831320    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetMachineName
	I0728 18:47:51.831485    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:47:51.831587    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.831665    4673 provision.go:143] copyHostCerts
	I0728 18:47:51.831695    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:47:51.831749    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:47:51.831755    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:47:51.831890    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:47:51.832106    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:47:51.832136    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:47:51.832140    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:47:51.832279    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:47:51.832439    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:47:51.832472    4673 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:47:51.832477    4673 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:47:51.832550    4673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:47:51.832700    4673 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.multinode-362000-m03 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-362000-m03]
	I0728 18:47:51.967383    4673 provision.go:177] copyRemoteCerts
	I0728 18:47:51.967435    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:47:51.967450    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:51.967730    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:51.967885    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:51.967980    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:51.968076    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:52.006868    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 18:47:52.006936    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:47:52.026208    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 18:47:52.026293    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0728 18:47:52.045582    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 18:47:52.045646    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 18:47:52.064975    4673 provision.go:87] duration metric: took 233.655646ms to configureAuth
	I0728 18:47:52.064988    4673 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:47:52.065146    4673 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:47:52.065160    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:52.065306    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:52.065397    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:52.065469    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.065546    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.065619    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:52.065732    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:52.065859    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:52.065866    4673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:47:52.128687    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:47:52.128708    4673 buildroot.go:70] root file system type: tmpfs
	I0728 18:47:52.128791    4673 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:47:52.128801    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:52.128935    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:52.129020    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.129109    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.129197    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:52.129347    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:52.129507    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:52.129552    4673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	Environment="NO_PROXY=192.169.0.13,192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:47:52.202616    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	Environment=NO_PROXY=192.169.0.13,192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:47:52.202634    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:52.202761    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:52.202854    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.202943    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:52.203055    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:52.203193    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:52.203331    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:52.203343    4673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:47:53.789096    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:47:53.789113    4673 machine.go:97] duration metric: took 37.180851437s to provisionDockerMachine
	I0728 18:47:53.789121    4673 start.go:293] postStartSetup for "multinode-362000-m03" (driver="hyperkit")
	I0728 18:47:53.789135    4673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:47:53.789145    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.789333    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:47:53.789347    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:53.789452    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.789550    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.789634    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.789730    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:53.828141    4673 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:47:53.831204    4673 command_runner.go:130] > NAME=Buildroot
	I0728 18:47:53.831216    4673 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0728 18:47:53.831222    4673 command_runner.go:130] > ID=buildroot
	I0728 18:47:53.831238    4673 command_runner.go:130] > VERSION_ID=2023.02.9
	I0728 18:47:53.831245    4673 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0728 18:47:53.831301    4673 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:47:53.831313    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:47:53.831397    4673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:47:53.831578    4673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:47:53.831585    4673 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> /etc/ssl/certs/15332.pem
	I0728 18:47:53.831741    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:47:53.838987    4673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:47:53.858744    4673 start.go:296] duration metric: took 69.608683ms for postStartSetup
	I0728 18:47:53.858765    4673 fix.go:56] duration metric: took 37.359667921s for fixHost
	I0728 18:47:53.858822    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:53.858947    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.859035    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.859110    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.859188    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.859302    4673 main.go:141] libmachine: Using SSH client type: native
	I0728 18:47:53.859439    4673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf7300c0] 0xf732e20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0728 18:47:53.859446    4673 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0728 18:47:53.922478    4673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217674.062640264
	
	I0728 18:47:53.922492    4673 fix.go:216] guest clock: 1722217674.062640264
	I0728 18:47:53.922499    4673 fix.go:229] Guest: 2024-07-28 18:47:54.062640264 -0700 PDT Remote: 2024-07-28 18:47:53.858772 -0700 PDT m=+135.476717707 (delta=203.868264ms)
	I0728 18:47:53.922510    4673 fix.go:200] guest clock delta is within tolerance: 203.868264ms
	I0728 18:47:53.922514    4673 start.go:83] releasing machines lock for "multinode-362000-m03", held for 37.423447127s
	I0728 18:47:53.922532    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.922671    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetIP
	I0728 18:47:53.946229    4673 out.go:177] * Found network options:
	I0728 18:47:53.965869    4673 out.go:177]   - NO_PROXY=192.169.0.13,192.169.0.14
	W0728 18:47:53.987057    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	W0728 18:47:53.987087    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:47:53.987107    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.987843    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.988040    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .DriverName
	I0728 18:47:53.988154    4673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:47:53.988193    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	W0728 18:47:53.988228    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	W0728 18:47:53.988251    4673 proxy.go:119] fail to check proxy env: Error ip not in block
	I0728 18:47:53.988346    4673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0728 18:47:53.988398    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHHostname
	I0728 18:47:53.988414    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.988612    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHPort
	I0728 18:47:53.988640    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.988801    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHKeyPath
	I0728 18:47:53.988823    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.989022    4673 main.go:141] libmachine: (multinode-362000-m03) Calling .GetSSHUsername
	I0728 18:47:53.989023    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:53.989164    4673 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m03/id_rsa Username:docker}
	I0728 18:47:54.023890    4673 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0728 18:47:54.024043    4673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:47:54.024097    4673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:47:54.074254    4673 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0728 18:47:54.074369    4673 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0728 18:47:54.074394    4673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:47:54.074406    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:47:54.074508    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:47:54.089984    4673 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0728 18:47:54.090233    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0728 18:47:54.098518    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:47:54.106701    4673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:47:54.106755    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:47:54.115138    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:47:54.123668    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:47:54.131990    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:47:54.140587    4673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:47:54.149140    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:47:54.157581    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:47:54.166150    4673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:47:54.174714    4673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:47:54.182559    4673 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0728 18:47:54.182644    4673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:47:54.190271    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:47:54.298866    4673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:47:54.318067    4673 start.go:495] detecting cgroup driver to use...
	I0728 18:47:54.318140    4673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:47:54.338026    4673 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0728 18:47:54.338581    4673 command_runner.go:130] > [Unit]
	I0728 18:47:54.338591    4673 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 18:47:54.338596    4673 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 18:47:54.338601    4673 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0728 18:47:54.338605    4673 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0728 18:47:54.338609    4673 command_runner.go:130] > StartLimitBurst=3
	I0728 18:47:54.338612    4673 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 18:47:54.338620    4673 command_runner.go:130] > [Service]
	I0728 18:47:54.338625    4673 command_runner.go:130] > Type=notify
	I0728 18:47:54.338628    4673 command_runner.go:130] > Restart=on-failure
	I0728 18:47:54.338632    4673 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0728 18:47:54.338636    4673 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13,192.169.0.14
	I0728 18:47:54.338642    4673 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 18:47:54.338650    4673 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 18:47:54.338656    4673 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 18:47:54.338662    4673 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 18:47:54.338668    4673 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 18:47:54.338673    4673 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 18:47:54.338683    4673 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 18:47:54.338690    4673 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 18:47:54.338695    4673 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 18:47:54.338698    4673 command_runner.go:130] > ExecStart=
	I0728 18:47:54.338710    4673 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0728 18:47:54.338716    4673 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 18:47:54.338726    4673 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 18:47:54.338745    4673 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 18:47:54.338752    4673 command_runner.go:130] > LimitNOFILE=infinity
	I0728 18:47:54.338756    4673 command_runner.go:130] > LimitNPROC=infinity
	I0728 18:47:54.338760    4673 command_runner.go:130] > LimitCORE=infinity
	I0728 18:47:54.338765    4673 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 18:47:54.338769    4673 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 18:47:54.338773    4673 command_runner.go:130] > TasksMax=infinity
	I0728 18:47:54.338782    4673 command_runner.go:130] > TimeoutStartSec=0
	I0728 18:47:54.338789    4673 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 18:47:54.338792    4673 command_runner.go:130] > Delegate=yes
	I0728 18:47:54.338803    4673 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 18:47:54.338807    4673 command_runner.go:130] > KillMode=process
	I0728 18:47:54.338809    4673 command_runner.go:130] > [Install]
	I0728 18:47:54.338813    4673 command_runner.go:130] > WantedBy=multi-user.target
	I0728 18:47:54.338880    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:47:54.349724    4673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:47:54.369917    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:47:54.380285    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:47:54.390909    4673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:47:54.414303    4673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:47:54.425462    4673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:47:54.439971    4673 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 18:47:54.440174    4673 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:47:54.442948    4673 command_runner.go:130] > /usr/bin/cri-dockerd
	I0728 18:47:54.443108    4673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:47:54.450126    4673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:47:54.463499    4673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:47:54.556646    4673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:47:54.662379    4673 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:47:54.662402    4673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:47:54.677242    4673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:47:54.768476    4673 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:48:55.813148    4673 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0728 18:48:55.813163    4673 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0728 18:48:55.813241    4673 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.045192266s)
	I0728 18:48:55.813322    4673 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 18:48:55.822236    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0728 18:48:55.822250    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497333097Z" level=info msg="Starting up"
	I0728 18:48:55.822264    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497791961Z" level=info msg="containerd not running, starting managed containerd"
	I0728 18:48:55.822278    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.498335029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	I0728 18:48:55.822288    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.516158090Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0728 18:48:55.822298    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531116014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0728 18:48:55.822314    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531180338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0728 18:48:55.822323    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531246321Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0728 18:48:55.822333    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531318847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822344    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531481171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822353    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531529904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822372    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531657072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822385    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531697300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822397    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531730875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822407    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531760248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822417    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531885342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822426    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.532079562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822441    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533663897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822450    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533709153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0728 18:48:55.822590    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533830614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0728 18:48:55.822605    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533871544Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0728 18:48:55.822615    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534025855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0728 18:48:55.822624    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534095225Z" level=info msg="metadata content store policy set" policy=shared
	I0728 18:48:55.822633    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535457940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0728 18:48:55.822641    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535509819Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0728 18:48:55.822649    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535544130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0728 18:48:55.822660    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535582591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0728 18:48:55.822670    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535616821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0728 18:48:55.822679    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535678991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0728 18:48:55.822688    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535893163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0728 18:48:55.822697    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535972460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0728 18:48:55.822706    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536011449Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0728 18:48:55.822716    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536084022Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0728 18:48:55.822726    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536119994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822738    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536150433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822748    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536180092Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822757    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536209848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822768    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536239441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822777    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536268585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822890    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536297017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822902    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536324822Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0728 18:48:55.822911    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536369752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822923    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536404061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822932    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536433648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822940    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536477196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822950    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536515276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822959    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536547653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822968    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536576577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822977    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536605955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822986    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536635251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.822995    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536665832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823004    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536694177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823013    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536722442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823022    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536752762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823031    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536783569Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0728 18:48:55.823040    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536818503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823049    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536849022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823058    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536877256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0728 18:48:55.823067    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536948425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0728 18:48:55.823081    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536992137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0728 18:48:55.823091    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537090826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0728 18:48:55.823215    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537127999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0728 18:48:55.823228    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537156657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0728 18:48:55.823241    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537187154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0728 18:48:55.823249    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537219245Z" level=info msg="NRI interface is disabled by configuration."
	I0728 18:48:55.823258    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537399754Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0728 18:48:55.823266    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537483452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0728 18:48:55.823274    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537565490Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0728 18:48:55.823282    4673 command_runner.go:130] > Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537629407Z" level=info msg="containerd successfully booted in 0.022253s"
	I0728 18:48:55.823290    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.517443604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0728 18:48:55.823298    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.531581234Z" level=info msg="Loading containers: start."
	I0728 18:48:55.823317    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.625199277Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0728 18:48:55.823327    4673 command_runner.go:130] > Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.689684132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0728 18:48:55.823336    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.891648560Z" level=info msg="Loading containers: done."
	I0728 18:48:55.823349    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906046920Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	I0728 18:48:55.823357    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906215109Z" level=info msg="Daemon has completed initialization"
	I0728 18:48:55.823366    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927454157Z" level=info msg="API listen on /var/run/docker.sock"
	I0728 18:48:55.823380    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927719311Z" level=info msg="API listen on [::]:2376"
	I0728 18:48:55.823390    4673 command_runner.go:130] > Jul 29 01:47:53 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	I0728 18:48:55.823398    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.927200063Z" level=info msg="Processing signal 'terminated'"
	I0728 18:48:55.823409    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928060039Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0728 18:48:55.823417    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0728 18:48:55.823425    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928240054Z" level=info msg="Daemon shutdown complete"
	I0728 18:48:55.823435    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928277964Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0728 18:48:55.823465    4673 command_runner.go:130] > Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928289772Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0728 18:48:55.823472    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0728 18:48:55.823478    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0728 18:48:55.823484    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0728 18:48:55.823491    4673 command_runner.go:130] > Jul 29 01:47:55 multinode-362000-m03 dockerd[848]: time="2024-07-29T01:47:55.965954327Z" level=info msg="Starting up"
	I0728 18:48:55.823501    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 dockerd[848]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0728 18:48:55.823510    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0728 18:48:55.823528    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0728 18:48:55.823540    4673 command_runner.go:130] > Jul 29 01:48:55 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0728 18:48:55.848031    4673 out.go:177] 
	W0728 18:48:55.868655    4673 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:47:51 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497333097Z" level=info msg="Starting up"
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.497791961Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:47:51 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:51.498335029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.516158090Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531116014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531180338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531246321Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531318847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531481171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531529904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531657072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531697300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531730875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531760248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.531885342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.532079562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533663897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533709153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533830614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.533871544Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534025855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.534095225Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535457940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535509819Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535544130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535582591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535616821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535678991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535893163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.535972460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536011449Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536084022Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536119994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536150433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536180092Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536209848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536239441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536268585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536297017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536324822Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536369752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536404061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536433648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536477196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536515276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536547653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536576577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536605955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536635251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536665832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536694177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536722442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536752762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536783569Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536818503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536849022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536877256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536948425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.536992137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537090826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537127999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537156657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537187154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537219245Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537399754Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537483452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537565490Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:47:51 multinode-362000-m03 dockerd[518]: time="2024-07-29T01:47:51.537629407Z" level=info msg="containerd successfully booted in 0.022253s"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.517443604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.531581234Z" level=info msg="Loading containers: start."
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.625199277Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:47:52 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:52.689684132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.891648560Z" level=info msg="Loading containers: done."
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906046920Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.906215109Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927454157Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:47:53 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:53.927719311Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:47:53 multinode-362000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.927200063Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928060039Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:47:54 multinode-362000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928240054Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928277964Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:47:54 multinode-362000-m03 dockerd[512]: time="2024-07-29T01:47:54.928289772Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:47:55 multinode-362000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:47:55 multinode-362000-m03 dockerd[848]: time="2024-07-29T01:47:55.965954327Z" level=info msg="Starting up"
	Jul 29 01:48:55 multinode-362000-m03 dockerd[848]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:48:55 multinode-362000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 18:48:55.868782    4673 out.go:239] * 
	W0728 18:48:55.870089    4673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:48:55.931441    4673 out.go:177] 
	
	
	==> Docker <==
	Jul 29 01:46:32 multinode-362000 dockerd[912]: time="2024-07-29T01:46:32.918386471Z" level=info msg="ignoring event" container=bf4cf04d618777ab9b361d3927a924bbebddc7fd8578fa27d82932dc19c2af55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 01:46:32 multinode-362000 dockerd[919]: time="2024-07-29T01:46:32.918744770Z" level=warning msg="cleaning up after shim disconnected" id=bf4cf04d618777ab9b361d3927a924bbebddc7fd8578fa27d82932dc19c2af55 namespace=moby
	Jul 29 01:46:32 multinode-362000 dockerd[919]: time="2024-07-29T01:46:32.918787349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.274020489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.274066117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.274076809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.274150725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.277106244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.277768857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.278246730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.278426212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 cri-dockerd[1166]: time="2024-07-29T01:46:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c8c88691aafc1181ae6fe4252635edc33b1a161be92fa1f9a10d728428c130c9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 01:46:34 multinode-362000 cri-dockerd[1166]: time="2024-07-29T01:46:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e192dd626ab7c43ae9c76b8ec92fc34a4f0dc51a028f7801cb4000bc74e760c/resolv.conf as [nameserver 192.169.0.1]"
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.519453823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.519615436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.519633811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.519862848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.546005980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.546063923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.546073355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:34 multinode-362000 dockerd[919]: time="2024-07-29T01:46:34.552096760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:46 multinode-362000 dockerd[919]: time="2024-07-29T01:46:46.350198028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:46:46 multinode-362000 dockerd[919]: time="2024-07-29T01:46:46.350393289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:46:46 multinode-362000 dockerd[919]: time="2024-07-29T01:46:46.350415706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:46:46 multinode-362000 dockerd[919]: time="2024-07-29T01:46:46.350559349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	615d643268fd6       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   5651c2bcb0358       storage-provisioner
	a14e21342cc2b       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   9e192dd626ab7       coredns-7db6d8ff4d-8npcw
	06f3db300e5f8       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   c8c88691aafc1       busybox-fc5497c4f-8hq8g
	d86bac2cfe90e       6f1d07c71fa0f                                                                                         2 minutes ago       Running             kindnet-cni               1                   8dc29ed65f7f4       kindnet-4mw5v
	bf4cf04d61877       6e38f40d628db                                                                                         2 minutes ago       Exited              storage-provisioner       1                   5651c2bcb0358       storage-provisioner
	c07a07e2fabbb       55bb025d2cfa5                                                                                         2 minutes ago       Running             kube-proxy                1                   a84cc7a02b297       kube-proxy-tz5h5
	45a82e7e6550e       3861cfcd7c04c                                                                                         2 minutes ago       Running             etcd                      1                   3c7036540730d       etcd-multinode-362000
	c277772502d2c       76932a3b37d7e                                                                                         2 minutes ago       Running             kube-controller-manager   1                   fd2cd1c23f1a0       kube-controller-manager-multinode-362000
	4028d2de45061       1f6d574d502f3                                                                                         2 minutes ago       Running             kube-apiserver            1                   40dadecece286       kube-apiserver-multinode-362000
	fa7ef71abab85       3edc18e7b7672                                                                                         2 minutes ago       Running             kube-scheduler            1                   c2afbe824f75f       kube-scheduler-multinode-362000
	fe2daed37b2f7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 minutes ago       Exited              busybox                   0                   9e1e93dc72426       busybox-fc5497c4f-8hq8g
	4e01b33bc28ce       cbb01a7bd410d                                                                                         8 minutes ago       Exited              coredns                   0                   de282e66d4c05       coredns-7db6d8ff4d-8npcw
	a44317c7df722       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              8 minutes ago       Exited              kindnet-cni               0                   a8dcd682eb598       kindnet-4mw5v
	473044afd6a20       55bb025d2cfa5                                                                                         8 minutes ago       Exited              kube-proxy                0                   3050e483a8a8d       kube-proxy-tz5h5
	898c4f8b22692       76932a3b37d7e                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   c5e0cac22c053       kube-controller-manager-multinode-362000
	f4075b746de31       1f6d574d502f3                                                                                         9 minutes ago       Exited              kube-apiserver            0                   1e7d4787a9c38       kube-apiserver-multinode-362000
	ef990ab76809a       3edc18e7b7672                                                                                         9 minutes ago       Exited              kube-scheduler            0                   9bd37faa2f0ae       kube-scheduler-multinode-362000
	e54a6e4f589e1       3861cfcd7c04c                                                                                         9 minutes ago       Exited              etcd                      0                   9ebd1495f3898       etcd-multinode-362000
	
	
	==> coredns [4e01b33bc28c] <==
	[INFO] 10.244.1.2:37359 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000065134s
	[INFO] 10.244.1.2:58343 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075262s
	[INFO] 10.244.1.2:49050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090366s
	[INFO] 10.244.1.2:53653 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107571s
	[INFO] 10.244.1.2:56614 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107796s
	[INFO] 10.244.1.2:36768 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092239s
	[INFO] 10.244.1.2:47351 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105143s
	[INFO] 10.244.0.3:57350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085706s
	[INFO] 10.244.0.3:38330 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000035689s
	[INFO] 10.244.0.3:34046 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005355s
	[INFO] 10.244.0.3:37101 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083044s
	[INFO] 10.244.1.2:35916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149042s
	[INFO] 10.244.1.2:52331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100403s
	[INFO] 10.244.1.2:59376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110433s
	[INFO] 10.244.1.2:54731 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089837s
	[INFO] 10.244.0.3:55981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000054156s
	[INFO] 10.244.0.3:52651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064795s
	[INFO] 10.244.0.3:44319 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045378s
	[INFO] 10.244.0.3:47078 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00004451s
	[INFO] 10.244.1.2:41717 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100439s
	[INFO] 10.244.1.2:48492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113445s
	[INFO] 10.244.1.2:34934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060259s
	[INFO] 10.244.1.2:39620 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000143004s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a14e21342cc2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58216 - 36732 "HINFO IN 8432614316920020235.6084401789246559191. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012537762s
	
	
	==> describe nodes <==
	Name:               multinode-362000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_28T18_39_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:39:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:48:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:46:21 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:46:21 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:46:21 +0000   Mon, 29 Jul 2024 01:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:46:21 +0000   Mon, 29 Jul 2024 01:46:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-362000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee8485d122634f2ba2c091f404ffd7bc
	  System UUID:                81224f45-0000-0000-b808-288a2b40595b
	  Boot ID:                    2580fb4a-5ea5-439d-b681-ce79f201b6f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8hq8g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 coredns-7db6d8ff4d-8npcw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m47s
	  kube-system                 etcd-multinode-362000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m1s
	  kube-system                 kindnet-4mw5v                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m48s
	  kube-system                 kube-apiserver-multinode-362000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-controller-manager-multinode-362000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-proxy-tz5h5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 kube-scheduler-multinode-362000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m46s                  kube-proxy       
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m1s                   kubelet          Node multinode-362000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m1s                   kubelet          Node multinode-362000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m1s                   kubelet          Node multinode-362000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m1s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m48s                  node-controller  Node multinode-362000 event: Registered Node multinode-362000 in Controller
	  Normal  NodeReady                8m32s                  kubelet          Node multinode-362000 status is now: NodeReady
	  Normal  Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m59s (x8 over 2m59s)  kubelet          Node multinode-362000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x8 over 2m59s)  kubelet          Node multinode-362000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x7 over 2m59s)  kubelet          Node multinode-362000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m44s                  node-controller  Node multinode-362000 event: Registered Node multinode-362000 in Controller
	
	
	Name:               multinode-362000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_28T18_46_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:46:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:48:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:47:14 +0000   Mon, 29 Jul 2024 01:46:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:47:14 +0000   Mon, 29 Jul 2024 01:46:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:47:14 +0000   Mon, 29 Jul 2024 01:46:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:47:14 +0000   Mon, 29 Jul 2024 01:47:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-362000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 06c6d7d6883745c9b9f3a798ff6cbfa2
	  System UUID:                80374d1a-0000-0000-bdda-22c83e05ebd1
	  Boot ID:                    cc92abce-c36c-4aa2-8603-cfe21806d49f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-txz4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kindnet-8hhwv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m6s
	  kube-system                 kube-proxy-dzz6p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 7m59s                kube-proxy  
	  Normal  Starting                 116s                 kube-proxy  
	  Normal  NodeHasSufficientMemory  8m7s (x2 over 8m7s)  kubelet     Node multinode-362000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m7s (x2 over 8m7s)  kubelet     Node multinode-362000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m7s (x2 over 8m7s)  kubelet     Node multinode-362000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m44s                kubelet     Node multinode-362000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  118s (x2 over 118s)  kubelet     Node multinode-362000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x2 over 118s)  kubelet     Node multinode-362000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x2 over 118s)  kubelet     Node multinode-362000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                103s                 kubelet     Node multinode-362000-m02 status is now: NodeReady
	
	
	Name:               multinode-362000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-362000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-362000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_28T18_44_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:44:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-362000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:45:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 01:45:17 +0000   Mon, 29 Jul 2024 01:46:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 01:45:17 +0000   Mon, 29 Jul 2024 01:46:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 01:45:17 +0000   Mon, 29 Jul 2024 01:46:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 01:45:17 +0000   Mon, 29 Jul 2024 01:46:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-362000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1da7a85216fe4b7485b60daeaa8b8656
	  System UUID:                5cda4c06-0000-0000-808b-dbe144e26e44
	  Boot ID:                    96dfdc45-786c-4a7f-bc88-a9192001a90d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5dhhf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-7gm24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m2s)  kubelet          Node multinode-362000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m2s)  kubelet          Node multinode-362000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m2s)  kubelet          Node multinode-362000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                node-controller  Node multinode-362000-m03 event: Registered Node multinode-362000-m03 in Controller
	  Normal  NodeReady                3m40s                kubelet          Node multinode-362000-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m44s                node-controller  Node multinode-362000-m03 event: Registered Node multinode-362000-m03 in Controller
	  Normal  NodeNotReady             2m3s                 node-controller  Node multinode-362000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +5.676478] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007015] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.573471] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.245078] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.641182] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.100501] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +1.858081] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.253358] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.054551] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.043633] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.103559] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +2.457224] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.104507] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.104698] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.139348] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +0.405917] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +1.680498] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[  +0.051941] kauditd_printk_skb: 202 callbacks suppressed
	[Jul29 01:46] kauditd_printk_skb: 90 callbacks suppressed
	[  +2.433331] systemd-fstab-generator[2245]: Ignoring "noauto" option for root device
	[  +8.348659] kauditd_printk_skb: 42 callbacks suppressed
	[ +32.493519] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [45a82e7e6550] <==
	{"level":"info","ts":"2024-07-29T01:45:59.254378Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:45:59.255112Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:45:59.255139Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:45:59.252271Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:45:59.255429Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:45:59.259885Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:45:59.258261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-07-29T01:45:59.260751Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-07-29T01:45:59.260894Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:45:59.26121Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:45:59.258274Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T01:46:00.530366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T01:46:00.530414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:46:00.530431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:46:00.53044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T01:46:00.530445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-07-29T01:46:00.530452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T01:46:00.530458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-07-29T01:46:00.536523Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-362000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:46:00.53663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:46:00.536924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:46:00.54008Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:46:00.540187Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:46:00.541983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-07-29T01:46:00.542384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e54a6e4f589e] <==
	{"level":"info","ts":"2024-07-29T01:39:52.606096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.606117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-07-29T01:39:52.611542Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-362000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:39:52.6118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:39:52.616009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:39:52.618374Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.622344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:39:52.622402Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:39:52.623812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:39:52.624929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-07-29T01:39:52.624972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.627332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:39:52.62747Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:45:30.665721Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T01:45:30.665782Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-362000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2024-07-29T01:45:30.665853Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:45:30.665912Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:45:30.679536Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:45:30.679563Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T01:45:30.679595Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2024-07-29T01:45:30.685001Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:45:30.685092Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-07-29T01:45:30.685101Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-362000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	
	==> kernel <==
	 01:48:58 up 3 min,  0 users,  load average: 0.11, 0.10, 0.04
	Linux multinode-362000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a44317c7df72] <==
	I0729 01:44:44.886120       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:44:44.886305       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:44:54.890180       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:44:54.890261       1 main.go:299] handling current node
	I0729 01:44:54.890284       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:44:54.890298       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:45:04.890630       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:45:04.890685       1 main.go:299] handling current node
	I0729 01:45:04.890698       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:45:04.890702       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:45:04.890968       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:45:04.891002       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	I0729 01:45:04.891168       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0729 01:45:14.886330       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:45:14.886539       1 main.go:299] handling current node
	I0729 01:45:14.886722       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:45:14.886831       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:45:14.887145       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:45:14.887270       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	I0729 01:45:24.888291       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:45:24.888417       1 main.go:299] handling current node
	I0729 01:45:24.888436       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:45:24.888445       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:45:24.888810       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:45:24.888908       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [d86bac2cfe90] <==
	I0729 01:48:14.123806       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:48:24.122456       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:48:24.122481       1 main.go:299] handling current node
	I0729 01:48:24.122493       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:48:24.122498       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:48:24.122830       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:48:24.122841       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	I0729 01:48:34.119619       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:48:34.119688       1 main.go:299] handling current node
	I0729 01:48:34.119710       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:48:34.119720       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:48:34.120127       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:48:34.120185       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	I0729 01:48:44.119548       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:48:44.119583       1 main.go:299] handling current node
	I0729 01:48:44.119599       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:48:44.119606       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:48:44.119879       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:48:44.119939       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	I0729 01:48:54.119591       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0729 01:48:54.119671       1 main.go:299] handling current node
	I0729 01:48:54.119690       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0729 01:48:54.119698       1 main.go:322] Node multinode-362000-m02 has CIDR [10.244.1.0/24] 
	I0729 01:48:54.119899       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0729 01:48:54.120019       1 main.go:322] Node multinode-362000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [4028d2de4506] <==
	I0729 01:46:01.487928       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:46:01.488241       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 01:46:01.488285       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 01:46:01.488291       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 01:46:01.488398       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:46:01.488750       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 01:46:01.488837       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:46:01.488959       1 policy_source.go:224] refreshing policies
	I0729 01:46:01.489689       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:46:01.491911       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 01:46:01.521712       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 01:46:01.521803       1 aggregator.go:165] initial CRD sync complete...
	I0729 01:46:01.521817       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 01:46:01.521825       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 01:46:01.521832       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:46:01.540437       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:46:02.394130       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 01:46:02.596881       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0729 01:46:02.597671       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:46:02.600057       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 01:46:03.383937       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:46:03.546864       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:46:03.563847       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:46:03.649748       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:46:03.654364       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [f4075b746de3] <==
	W0729 01:45:31.681050       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681131       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681184       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681215       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681363       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681454       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681506       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.679093       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.679418       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.679555       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681471       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681369       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681261       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681052       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.680278       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681996       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.680474       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681391       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681616       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681685       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681699       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681711       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681796       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681067       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:45:31.681275       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [898c4f8b2269] <==
	I0729 01:40:11.027101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.124535ms"
	I0729 01:40:11.027181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.955µs"
	I0729 01:40:25.080337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="144.077µs"
	I0729 01:40:25.091162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.818µs"
	I0729 01:40:26.585034       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.036µs"
	I0729 01:40:26.604104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.022661ms"
	I0729 01:40:26.604164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.335µs"
	I0729 01:40:29.266767       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 01:40:51.188661       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-362000-m02\" does not exist"
	I0729 01:40:51.198306       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-362000-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:40:54.270525       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-362000-m02"
	I0729 01:41:14.160112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	I0729 01:41:16.670352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.140966ms"
	I0729 01:41:16.689017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.156378ms"
	I0729 01:41:16.689239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.248µs"
	I0729 01:41:16.690375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.154µs"
	I0729 01:41:18.880601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.490626ms"
	I0729 01:41:18.880810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.371µs"
	I0729 01:41:19.267756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.930765ms"
	I0729 01:41:19.267954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.527µs"
	I0729 01:44:55.600249       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	I0729 01:44:55.600705       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-362000-m03\" does not exist"
	I0729 01:44:55.609078       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-362000-m03" podCIDRs=["10.244.2.0/24"]
	I0729 01:44:59.317662       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-362000-m03"
	I0729 01:45:17.448210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	
	
	==> kube-controller-manager [c277772502d2] <==
	I0729 01:46:21.809231       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m03"
	I0729 01:46:34.767242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.558µs"
	I0729 01:46:34.794124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.005288ms"
	I0729 01:46:34.794577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.795µs"
	I0729 01:46:34.804285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.235496ms"
	I0729 01:46:34.805499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.551µs"
	I0729 01:46:54.164629       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	I0729 01:46:54.250610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.161004ms"
	I0729 01:46:54.252646       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.939812ms"
	I0729 01:46:55.094736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.818797ms"
	I0729 01:46:55.100700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.9224ms"
	I0729 01:46:55.108795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.054318ms"
	I0729 01:46:55.108853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.052µs"
	I0729 01:46:59.236387       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-362000-m02\" does not exist"
	I0729 01:46:59.240214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-362000-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:47:01.124238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.672µs"
	I0729 01:47:14.352829       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-362000-m02"
	I0729 01:47:14.361773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.801µs"
	I0729 01:47:25.154512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.684µs"
	I0729 01:47:25.161641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.884µs"
	I0729 01:47:25.170145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.966µs"
	I0729 01:47:25.327102       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="232.743µs"
	I0729 01:47:25.329072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.881µs"
	I0729 01:47:26.342570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.213248ms"
	I0729 01:47:26.342786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.988µs"
	
	
	==> kube-proxy [473044afd6a2] <==
	I0729 01:40:11.348502       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:40:11.365653       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0729 01:40:11.402559       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:40:11.402601       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:40:11.402613       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:40:11.404701       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:40:11.404918       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:40:11.404927       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:40:11.405549       1 config.go:192] "Starting service config controller"
	I0729 01:40:11.405561       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:40:11.405574       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:40:11.405577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:40:11.406068       1 config.go:319] "Starting node config controller"
	I0729 01:40:11.406074       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:40:11.505886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:40:11.506110       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:40:11.506263       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c07a07e2fabb] <==
	I0729 01:46:02.950402       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:46:02.972765       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0729 01:46:03.032776       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:46:03.032842       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:46:03.032856       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:46:03.035640       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:46:03.035872       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:46:03.035884       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:46:03.037031       1 config.go:192] "Starting service config controller"
	I0729 01:46:03.037157       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:46:03.037178       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:46:03.037181       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:46:03.037639       1 config.go:319] "Starting node config controller"
	I0729 01:46:03.037643       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:46:03.137257       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:46:03.137315       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:46:03.137837       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef990ab76809] <==
	E0729 01:39:54.313555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:39:54.313606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:39:54.313700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:39:54.319482       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:39:54.319640       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:39:54.320028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:54.320142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 01:39:54.320265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:39:54.320317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:39:54.320410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:54.320468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:39:54.320533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:39:54.320584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:39:54.326412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 01:39:54.326519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 01:39:54.326657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 01:39:54.326710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 01:39:54.326731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:39:54.326795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:39:55.161836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:39:55.161876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:39:55.228811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 01:39:55.228993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 01:39:55.708397       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 01:45:30.680012       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fa7ef71abab8] <==
	I0729 01:46:00.029347       1 serving.go:380] Generated self-signed cert in-memory
	W0729 01:46:01.432923       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 01:46:01.432960       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:46:01.432968       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 01:46:01.432973       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 01:46:01.471881       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 01:46:01.472003       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:46:01.473511       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 01:46:01.473545       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 01:46:01.473819       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 01:46:01.474349       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:46:01.574515       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 01:46:16 multinode-362000 kubelet[1427]: E0729 01:46:16.300342    1427 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-8hq8g" podUID="d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9"
	Jul 29 01:46:17 multinode-362000 kubelet[1427]: E0729 01:46:17.888999    1427 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 01:46:17 multinode-362000 kubelet[1427]: E0729 01:46:17.889660    1427 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0fcbb6f-1182-4d9e-bc04-456f1b4de1db-config-volume podName:a0fcbb6f-1182-4d9e-bc04-456f1b4de1db nodeName:}" failed. No retries permitted until 2024-07-29 01:46:33.889636988 +0000 UTC m=+35.751381519 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0fcbb6f-1182-4d9e-bc04-456f1b4de1db-config-volume") pod "coredns-7db6d8ff4d-8npcw" (UID: "a0fcbb6f-1182-4d9e-bc04-456f1b4de1db") : object "kube-system"/"coredns" not registered
	Jul 29 01:46:17 multinode-362000 kubelet[1427]: E0729 01:46:17.989638    1427 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 29 01:46:17 multinode-362000 kubelet[1427]: E0729 01:46:17.989841    1427 projected.go:200] Error preparing data for projected volume kube-api-access-qb8zl for pod default/busybox-fc5497c4f-8hq8g: object "default"/"kube-root-ca.crt" not registered
	Jul 29 01:46:17 multinode-362000 kubelet[1427]: E0729 01:46:17.990010    1427 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9-kube-api-access-qb8zl podName:d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9 nodeName:}" failed. No retries permitted until 2024-07-29 01:46:33.989988115 +0000 UTC m=+35.851732648 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qb8zl" (UniqueName: "kubernetes.io/projected/d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9-kube-api-access-qb8zl") pod "busybox-fc5497c4f-8hq8g" (UID: "d1dba4b3-d83f-47fc-beb4-89fb8b5cffa9") : object "default"/"kube-root-ca.crt" not registered
	Jul 29 01:46:33 multinode-362000 kubelet[1427]: I0729 01:46:33.739510    1427 scope.go:117] "RemoveContainer" containerID="1255904b9cda944c5c652af2663a8ae09597a9d67290ce8b1c6b54a9ba8a6fe0"
	Jul 29 01:46:33 multinode-362000 kubelet[1427]: I0729 01:46:33.740121    1427 scope.go:117] "RemoveContainer" containerID="bf4cf04d618777ab9b361d3927a924bbebddc7fd8578fa27d82932dc19c2af55"
	Jul 29 01:46:33 multinode-362000 kubelet[1427]: E0729 01:46:33.740264    1427 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9032906f-5102-4224-b894-d541cf7d67e7)\"" pod="kube-system/storage-provisioner" podUID="9032906f-5102-4224-b894-d541cf7d67e7"
	Jul 29 01:46:46 multinode-362000 kubelet[1427]: I0729 01:46:46.299086    1427 scope.go:117] "RemoveContainer" containerID="bf4cf04d618777ab9b361d3927a924bbebddc7fd8578fa27d82932dc19c2af55"
	Jul 29 01:46:58 multinode-362000 kubelet[1427]: E0729 01:46:58.318179    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:46:58 multinode-362000 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:46:58 multinode-362000 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:46:58 multinode-362000 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:46:58 multinode-362000 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:47:58 multinode-362000 kubelet[1427]: E0729 01:47:58.318397    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:47:58 multinode-362000 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:47:58 multinode-362000 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:47:58 multinode-362000 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:47:58 multinode-362000 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:48:58 multinode-362000 kubelet[1427]: E0729 01:48:58.317222    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:48:58 multinode-362000 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:48:58 multinode-362000 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:48:58 multinode-362000 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:48:58 multinode-362000 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-362000 -n multinode-362000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-362000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (220.54s)

                                                
                                    
TestPreload (217.95s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-085000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-085000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (2m0.711996404s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-085000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-085000 image pull gcr.io/k8s-minikube/busybox: (1.33439589s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-085000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-085000: (8.361107976s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-085000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-085000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : exit status 90 (1m22.140892909s)

                                                
                                                
-- stdout --
	* [test-preload-085000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the hyperkit driver based on existing profile
	* Starting "test-preload-085000" primary control-plane node in "test-preload-085000" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperkit VM for "test-preload-085000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:54:11.085586    4983 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:54:11.086179    4983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:54:11.086187    4983 out.go:304] Setting ErrFile to fd 2...
	I0728 18:54:11.086194    4983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:54:11.086813    4983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:54:11.088309    4983 out.go:298] Setting JSON to false
	I0728 18:54:11.110777    4983 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5022,"bootTime":1722213029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:54:11.110866    4983 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:54:11.132857    4983 out.go:177] * [test-preload-085000] minikube v1.33.1 on Darwin 14.5
	I0728 18:54:11.174572    4983 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:54:11.174620    4983 notify.go:220] Checking for updates...
	I0728 18:54:11.217211    4983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:54:11.238230    4983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:54:11.259418    4983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:54:11.280259    4983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:54:11.301461    4983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:54:11.323182    4983 config.go:182] Loaded profile config "test-preload-085000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0728 18:54:11.323881    4983 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:54:11.323954    4983 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:54:11.333682    4983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53270
	I0728 18:54:11.334096    4983 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:54:11.334536    4983 main.go:141] libmachine: Using API Version  1
	I0728 18:54:11.334554    4983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:54:11.334793    4983 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:54:11.334915    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:11.356195    4983 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0728 18:54:11.377308    4983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:54:11.377894    4983 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:54:11.377943    4983 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:54:11.387599    4983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53272
	I0728 18:54:11.387932    4983 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:54:11.388268    4983 main.go:141] libmachine: Using API Version  1
	I0728 18:54:11.388283    4983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:54:11.388489    4983 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:54:11.388606    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:11.417346    4983 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 18:54:11.459354    4983 start.go:297] selected driver: hyperkit
	I0728 18:54:11.459383    4983 start.go:901] validating driver "hyperkit" against &{Name:test-preload-085000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:54:11.459559    4983 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:54:11.463881    4983 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:54:11.463985    4983 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 18:54:11.472250    4983 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 18:54:11.476104    4983 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:54:11.476125    4983 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 18:54:11.476205    4983 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:54:11.476254    4983 cni.go:84] Creating CNI manager for ""
	I0728 18:54:11.476279    4983 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:54:11.476340    4983 start.go:340] cluster config:
	{Name:test-preload-085000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:54:11.476421    4983 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:54:11.518181    4983 out.go:177] * Starting "test-preload-085000" primary control-plane node in "test-preload-085000" cluster
	I0728 18:54:11.541150    4983 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0728 18:54:11.595638    4983 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0728 18:54:11.595693    4983 cache.go:56] Caching tarball of preloaded images
	I0728 18:54:11.596059    4983 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0728 18:54:11.617759    4983 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0728 18:54:11.640080    4983 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0728 18:54:11.718176    4983 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0728 18:54:17.105697    4983 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0728 18:54:17.105888    4983 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0728 18:54:17.683589    4983 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
	I0728 18:54:17.683721    4983 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/test-preload-085000/config.json ...
	I0728 18:54:17.684191    4983 start.go:360] acquireMachinesLock for test-preload-085000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:54:17.684288    4983 start.go:364] duration metric: took 84.779µs to acquireMachinesLock for "test-preload-085000"
	I0728 18:54:17.684308    4983 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:54:17.684321    4983 fix.go:54] fixHost starting: 
	I0728 18:54:17.684613    4983 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:54:17.684634    4983 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:54:17.694406    4983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53278
	I0728 18:54:17.694785    4983 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:54:17.695152    4983 main.go:141] libmachine: Using API Version  1
	I0728 18:54:17.695161    4983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:54:17.695567    4983 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:54:17.695803    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:17.695968    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetState
	I0728 18:54:17.696144    4983 main.go:141] libmachine: (test-preload-085000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:54:17.696213    4983 main.go:141] libmachine: (test-preload-085000) DBG | hyperkit pid from json: 4918
	I0728 18:54:17.697120    4983 main.go:141] libmachine: (test-preload-085000) DBG | hyperkit pid 4918 missing from process table
	I0728 18:54:17.697169    4983 fix.go:112] recreateIfNeeded on test-preload-085000: state=Stopped err=<nil>
	I0728 18:54:17.697201    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	W0728 18:54:17.697317    4983 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:54:17.741101    4983 out.go:177] * Restarting existing hyperkit VM for "test-preload-085000" ...
	I0728 18:54:17.762008    4983 main.go:141] libmachine: (test-preload-085000) Calling .Start
	I0728 18:54:17.762270    4983 main.go:141] libmachine: (test-preload-085000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:54:17.762321    4983 main.go:141] libmachine: (test-preload-085000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/hyperkit.pid
	I0728 18:54:17.764095    4983 main.go:141] libmachine: (test-preload-085000) DBG | hyperkit pid 4918 missing from process table
	I0728 18:54:17.764115    4983 main.go:141] libmachine: (test-preload-085000) DBG | pid 4918 is in state "Stopped"
	I0728 18:54:17.764130    4983 main.go:141] libmachine: (test-preload-085000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/hyperkit.pid...
	I0728 18:54:17.764553    4983 main.go:141] libmachine: (test-preload-085000) DBG | Using UUID 7efd0e97-eeb5-4466-8bfb-33b15779799d
	I0728 18:54:17.876466    4983 main.go:141] libmachine: (test-preload-085000) DBG | Generated MAC 92:a8:e8:4a:c:ff
	I0728 18:54:17.876487    4983 main.go:141] libmachine: (test-preload-085000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-085000
	I0728 18:54:17.876597    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7efd0e97-eeb5-4466-8bfb-33b15779799d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b04e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:54:17.876627    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7efd0e97-eeb5-4466-8bfb-33b15779799d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b04e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 18:54:17.876671    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7efd0e97-eeb5-4466-8bfb-33b15779799d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/test-preload-085000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-085000"}
	I0728 18:54:17.876730    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7efd0e97-eeb5-4466-8bfb-33b15779799d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/test-preload-085000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-085000"
	I0728 18:54:17.876746    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 18:54:17.878171    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 DEBUG: hyperkit: Pid is 4996
	I0728 18:54:17.878559    4983 main.go:141] libmachine: (test-preload-085000) DBG | Attempt 0
	I0728 18:54:17.878573    4983 main.go:141] libmachine: (test-preload-085000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:54:17.878682    4983 main.go:141] libmachine: (test-preload-085000) DBG | hyperkit pid from json: 4996
	I0728 18:54:17.880827    4983 main.go:141] libmachine: (test-preload-085000) DBG | Searching for 92:a8:e8:4a:c:ff in /var/db/dhcpd_leases ...
	I0728 18:54:17.880897    4983 main.go:141] libmachine: (test-preload-085000) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0728 18:54:17.880924    4983 main.go:141] libmachine: (test-preload-085000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:92:a8:e8:4a:c:ff ID:1,92:a8:e8:4a:c:ff Lease:0x66a8474b}
	I0728 18:54:17.880941    4983 main.go:141] libmachine: (test-preload-085000) DBG | Found match: 92:a8:e8:4a:c:ff
	I0728 18:54:17.880982    4983 main.go:141] libmachine: (test-preload-085000) DBG | IP: 192.169.0.17
	I0728 18:54:17.881038    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetConfigRaw
	I0728 18:54:17.881745    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetIP
	I0728 18:54:17.881924    4983 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/test-preload-085000/config.json ...
	I0728 18:54:17.882309    4983 machine.go:94] provisionDockerMachine start ...
	I0728 18:54:17.882318    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:17.882469    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:17.882564    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:17.882668    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:17.882760    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:17.882848    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:17.882983    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:17.883210    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:17.883219    4983 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:54:17.886556    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 18:54:17.934851    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 18:54:17.935562    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:54:17.935583    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:54:17.935592    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:54:17.935600    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:54:18.319124    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 18:54:18.319188    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 18:54:18.433539    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 18:54:18.433559    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 18:54:18.433572    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 18:54:18.433586    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 18:54:18.434501    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 18:54:18.434514    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 18:54:24.019871    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0728 18:54:24.019952    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0728 18:54:24.019961    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0728 18:54:24.044314    4983 main.go:141] libmachine: (test-preload-085000) DBG | 2024/07/28 18:54:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0728 18:54:28.949932    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:54:28.949947    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetMachineName
	I0728 18:54:28.950089    4983 buildroot.go:166] provisioning hostname "test-preload-085000"
	I0728 18:54:28.950099    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetMachineName
	I0728 18:54:28.950205    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:28.950300    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:28.950383    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:28.950483    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:28.950587    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:28.950751    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:28.950906    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:28.950915    4983 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-085000 && echo "test-preload-085000" | sudo tee /etc/hostname
	I0728 18:54:29.014725    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-085000
	
	I0728 18:54:29.014746    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:29.014875    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:29.014979    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.015066    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.015150    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:29.015286    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:29.015439    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:29.015451    4983 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-085000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-085000/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-085000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:54:29.074927    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:54:29.074946    4983 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 18:54:29.074964    4983 buildroot.go:174] setting up certificates
	I0728 18:54:29.074972    4983 provision.go:84] configureAuth start
	I0728 18:54:29.074981    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetMachineName
	I0728 18:54:29.075111    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetIP
	I0728 18:54:29.075227    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:29.075333    4983 provision.go:143] copyHostCerts
	I0728 18:54:29.075427    4983 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 18:54:29.075440    4983 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 18:54:29.075615    4983 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 18:54:29.075862    4983 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 18:54:29.075874    4983 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 18:54:29.075995    4983 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 18:54:29.076194    4983 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 18:54:29.076200    4983 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 18:54:29.076286    4983 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 18:54:29.076488    4983 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.test-preload-085000 san=[127.0.0.1 192.169.0.17 localhost minikube test-preload-085000]
	I0728 18:54:29.229196    4983 provision.go:177] copyRemoteCerts
	I0728 18:54:29.229257    4983 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:54:29.229273    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:29.229403    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:29.229521    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.229630    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:29.229709    4983 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/id_rsa Username:docker}
	I0728 18:54:29.263080    4983 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 18:54:29.283100    4983 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0728 18:54:29.302944    4983 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:54:29.322484    4983 provision.go:87] duration metric: took 247.497402ms to configureAuth
	I0728 18:54:29.322498    4983 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:54:29.322632    4983 config.go:182] Loaded profile config "test-preload-085000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0728 18:54:29.322646    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:29.322783    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:29.322865    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:29.322964    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.323052    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.323134    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:29.323250    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:29.323378    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:29.323386    4983 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:54:29.376857    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:54:29.376869    4983 buildroot.go:70] root file system type: tmpfs
	I0728 18:54:29.376940    4983 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:54:29.376956    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:29.377084    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:29.377181    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.377257    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.377353    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:29.377475    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:29.377616    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:29.377662    4983 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:54:29.442514    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:54:29.442540    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:29.442667    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:29.442762    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.442870    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:29.442955    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:29.443111    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:29.443245    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:29.443261    4983 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
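The SSH command above is an idempotent write: the staged `docker.service.new` replaces the live unit (and triggers a reload and restart) only when the two files differ. The same diff-or-replace idiom, reduced to plain scratch files with illustrative names and the restart step stood in by the `mv` alone:

```shell
# diff-or-replace idiom from the log, on scratch files (names are
# illustrative): swap in the staged copy only when it differs.
printf 'old\n' > svc.conf
printf 'new\n' > svc.conf.new
diff -u svc.conf svc.conf.new >/dev/null || mv svc.conf.new svc.conf
cat svc.conf
```

When the files are identical, `diff` exits 0 and the right-hand side never runs, which is what makes repeated provisioning passes cheap.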
	I0728 18:54:31.051256    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:54:31.051273    4983 machine.go:97] duration metric: took 13.169051378s to provisionDockerMachine
	I0728 18:54:31.051285    4983 start.go:293] postStartSetup for "test-preload-085000" (driver="hyperkit")
	I0728 18:54:31.051299    4983 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:54:31.051310    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:31.051498    4983 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:54:31.051512    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:31.051610    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:31.051700    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:31.051812    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:31.051902    4983 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/id_rsa Username:docker}
	I0728 18:54:31.092927    4983 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:54:31.097299    4983 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 18:54:31.097313    4983 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 18:54:31.097425    4983 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 18:54:31.097618    4983 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 18:54:31.097828    4983 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:54:31.110059    4983 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 18:54:31.137509    4983 start.go:296] duration metric: took 86.20875ms for postStartSetup
	I0728 18:54:31.137533    4983 fix.go:56] duration metric: took 13.453314358s for fixHost
	I0728 18:54:31.137546    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:31.137680    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:31.137768    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:31.137873    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:31.137972    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:31.138096    4983 main.go:141] libmachine: Using SSH client type: native
	I0728 18:54:31.138243    4983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb1c80c0] 0xb1cae20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0728 18:54:31.138250    4983 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:54:31.189673    4983 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722218071.346063654
	
	I0728 18:54:31.189683    4983 fix.go:216] guest clock: 1722218071.346063654
	I0728 18:54:31.189688    4983 fix.go:229] Guest: 2024-07-28 18:54:31.346063654 -0700 PDT Remote: 2024-07-28 18:54:31.137536 -0700 PDT m=+20.086897041 (delta=208.527654ms)
	I0728 18:54:31.189710    4983 fix.go:200] guest clock delta is within tolerance: 208.527654ms
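The tolerance check above compares the guest clock against the host-side timestamp. A sketch of that comparison using the two timestamps from this log; the 2-second bound is an assumption for illustration, not necessarily minikube's actual constant:

```shell
# Clock-delta check sketch with the guest/host values logged above.
# The 2s tolerance is assumed for illustration only.
awk 'BEGIN {
  d = 1722218071.346063654 - 1722218071.137536
  if (d < 0) d = -d
  msg = (d < 2) ? "within tolerance" : "out of tolerance"
  print msg
}'
```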
	I0728 18:54:31.189713    4983 start.go:83] releasing machines lock for "test-preload-085000", held for 13.505517043s
	I0728 18:54:31.189732    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:31.189868    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetIP
	I0728 18:54:31.189963    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:31.190290    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:31.190387    4983 main.go:141] libmachine: (test-preload-085000) Calling .DriverName
	I0728 18:54:31.190478    4983 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:54:31.190507    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:31.190526    4983 ssh_runner.go:195] Run: cat /version.json
	I0728 18:54:31.190544    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHHostname
	I0728 18:54:31.190595    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:31.190628    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHPort
	I0728 18:54:31.190680    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:31.190712    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHKeyPath
	I0728 18:54:31.190778    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:31.190799    4983 main.go:141] libmachine: (test-preload-085000) Calling .GetSSHUsername
	I0728 18:54:31.190891    4983 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/id_rsa Username:docker}
	I0728 18:54:31.190918    4983 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/test-preload-085000/id_rsa Username:docker}
	I0728 18:54:31.270443    4983 ssh_runner.go:195] Run: systemctl --version
	I0728 18:54:31.275591    4983 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 18:54:31.279773    4983 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:54:31.279811    4983 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0728 18:54:31.292882    4983 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:54:31.292893    4983 start.go:495] detecting cgroup driver to use...
	I0728 18:54:31.293006    4983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:54:31.307781    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
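A few lines up, the crictl endpoint file is produced with the `mkdir -p` + `printf | sudo tee` pattern. The same write, pointed at a scratch directory instead of `/etc` (the path is illustrative; the endpoint value is the one from the log):

```shell
# Sketch of the crictl.yaml write from the log, against a scratch
# directory instead of /etc (path is illustrative).
mkdir -p ./etc
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  | tee ./etc/crictl.yaml >/dev/null
cat ./etc/crictl.yaml
```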
	I0728 18:54:31.316049    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:54:31.324442    4983 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:54:31.324480    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:54:31.332901    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:54:31.341352    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:54:31.349649    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:54:31.357942    4983 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:54:31.366424    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:54:31.374688    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:54:31.383050    4983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
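The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place. The `SystemdCgroup` toggle, for example, behaves like this on a scratch copy (the file content is illustrative; the flags are the GNU sed flags used inside the guest above):

```shell
# SystemdCgroup toggle from the sequence above, run on a scratch
# config.toml with illustrative content. The ( *) capture group
# preserves the original indentation.
printf '%s\n' '    SystemdCgroup = true' > config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
cat config.toml
```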
	I0728 18:54:31.391429    4983 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:54:31.398961    4983 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:54:31.406642    4983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:54:31.501464    4983 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:54:31.521015    4983 start.go:495] detecting cgroup driver to use...
	I0728 18:54:31.521091    4983 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:54:31.536249    4983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:54:31.547064    4983 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:54:31.567647    4983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:54:31.579043    4983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:54:31.589352    4983 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:54:31.609155    4983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:54:31.619372    4983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:54:31.634382    4983 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:54:31.637386    4983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:54:31.644602    4983 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:54:31.658067    4983 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:54:31.762568    4983 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:54:31.862644    4983 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:54:31.862712    4983 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:54:31.876651    4983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:54:31.971619    4983 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:55:32.996664    4983 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.025465259s)
	I0728 18:55:32.996732    4983 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 18:55:33.032286    4983 out.go:177] 
	W0728 18:55:33.053278    4983 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 01:54:29 test-preload-085000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:54:29 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:29.889188262Z" level=info msg="Starting up"
	Jul 29 01:54:29 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:29.889678786Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 01:54:29 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:29.890279314Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=493
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.906879787Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.921711388Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.921733304Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.921798302Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.921833052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.921977677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922016606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922130822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922165714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922178815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922186554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922302050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.922507118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924134780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924150754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924229911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924262649Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924384045Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924428810Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927470650Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927517196Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927531063Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927553741Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927564884Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927616753Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927781948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927854596Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927887935Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927899437Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927908293Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927919388Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927927197Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927936157Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927945475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927953556Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927961465Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927968455Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928008147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928027232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928038577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928047329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928055458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928063928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928071364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928079055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928086876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928097954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928105489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928112914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928120423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928129914Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928142199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928152867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928160729Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928182847Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928193093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928200156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928207820Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928214732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928222383Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928228803Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928362359Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928415176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928443751Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928478847Z" level=info msg="containerd successfully booted in 0.022412s"
	Jul 29 01:54:30 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:30.910613960Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:54:30 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:30.929549846Z" level=info msg="Loading containers: start."
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.073293640Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.131910888Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.177035132Z" level=warning msg="error locating sandbox id e474d1f0a058e18583fed1b7444beba5b4663e2141dd5241bcabb6ddc52aa6dc: sandbox e474d1f0a058e18583fed1b7444beba5b4663e2141dd5241bcabb6ddc52aa6dc not found"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.177279357Z" level=info msg="Loading containers: done."
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.183890699Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.184046208Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.205263100Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.205389339Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:54:31 test-preload-085000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.140441353Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141311904Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141674983Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141784089Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141784568Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:54:32 test-preload-085000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:54:33 test-preload-085000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:54:33 test-preload-085000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:54:33 test-preload-085000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:54:33 test-preload-085000 dockerd[909]: time="2024-07-29T01:54:33.178389858Z" level=info msg="Starting up"
	Jul 29 01:55:33 test-preload-085000 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:55:33 test-preload-085000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:55:33 test-preload-085000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:55:33 test-preload-085000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
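The root cause in the journal above is the second dockerd (pid 909) giving up after 60 seconds trying to dial `/run/containerd/containerd.sock`. A hypothetical triage helper that reports whether a containerd socket actually exists at a given path; `check_sock` is a name introduced here for illustration, and the path argument is the one from the log:

```shell
# Hypothetical triage helper: report whether a unix socket exists at the
# given path. check_sock is introduced here for illustration only.
check_sock() {
  if [ -S "$1" ]; then echo present; else echo missing; fi
}
check_sock /run/containerd/containerd.sock
```

If the socket is missing, restarting docker will keep failing the same way until the standalone containerd service is healthy again.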
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924262649Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924384045Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.924428810Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927470650Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927517196Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927531063Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927553741Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927564884Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927616753Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927781948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927854596Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927887935Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927899437Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927908293Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927919388Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927927197Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927936157Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927945475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927953556Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927961465Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.927968455Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928008147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928027232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928038577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928047329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928055458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928063928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928071364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928079055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928086876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928097954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928105489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928112914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928120423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928129914Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928142199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928152867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928160729Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928182847Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928193093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928200156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928207820Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928214732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928222383Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928228803Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928362359Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928415176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928443751Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 01:54:29 test-preload-085000 dockerd[493]: time="2024-07-29T01:54:29.928478847Z" level=info msg="containerd successfully booted in 0.022412s"
	Jul 29 01:54:30 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:30.910613960Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 01:54:30 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:30.929549846Z" level=info msg="Loading containers: start."
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.073293640Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.131910888Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.177035132Z" level=warning msg="error locating sandbox id e474d1f0a058e18583fed1b7444beba5b4663e2141dd5241bcabb6ddc52aa6dc: sandbox e474d1f0a058e18583fed1b7444beba5b4663e2141dd5241bcabb6ddc52aa6dc not found"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.177279357Z" level=info msg="Loading containers: done."
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.183890699Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.184046208Z" level=info msg="Daemon has completed initialization"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.205263100Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 01:54:31 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:31.205389339Z" level=info msg="API listen on [::]:2376"
	Jul 29 01:54:31 test-preload-085000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.140441353Z" level=info msg="Processing signal 'terminated'"
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141311904Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141674983Z" level=info msg="Daemon shutdown complete"
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141784089Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 01:54:32 test-preload-085000 dockerd[486]: time="2024-07-29T01:54:32.141784568Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 01:54:32 test-preload-085000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 01:54:33 test-preload-085000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 01:54:33 test-preload-085000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 01:54:33 test-preload-085000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 01:54:33 test-preload-085000 dockerd[909]: time="2024-07-29T01:54:33.178389858Z" level=info msg="Starting up"
	Jul 29 01:55:33 test-preload-085000 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 01:55:33 test-preload-085000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 01:55:33 test-preload-085000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 01:55:33 test-preload-085000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 18:55:33.053404    4983 out.go:239] * 
	W0728 18:55:33.054797    4983 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:55:33.138059    4983 out.go:177] 

** /stderr **
preload_test.go:68: out/minikube-darwin-amd64 start -p test-preload-085000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-07-28 18:55:33.184718 -0700 PDT m=+4174.945001237
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-085000 -n test-preload-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-085000 -n test-preload-085000: exit status 6 (150.717635ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0728 18:55:33.323097    5005 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-085000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-085000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-085000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-085000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-085000: (5.239020312s)
--- FAIL: TestPreload (217.95s)

TestScheduledStopUnix (141.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-787000 --memory=2048 --driver=hyperkit 
E0728 18:55:50.053405    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:56:00.968802    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-787000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.651158869s)

-- stdout --
	* [scheduled-stop-787000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-787000" primary control-plane node in "scheduled-stop-787000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-787000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:f8:16:9:63:ae
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-787000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:ce:eb:19:81:b0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:ce:eb:19:81:b0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-787000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-787000" primary control-plane node in "scheduled-stop-787000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-787000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:f8:16:9:63:ae
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-787000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:ce:eb:19:81:b0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:ce:eb:19:81:b0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-28 18:57:55.255776 -0700 PDT m=+4316.986459511
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-787000 -n scheduled-stop-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-787000 -n scheduled-stop-787000: exit status 7 (77.240758ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0728 18:57:55.330969    5104 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 18:57:55.330992    5104 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-787000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-787000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-787000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-787000: (5.248697879s)
--- FAIL: TestScheduledStopUnix (141.98s)

TestKubernetesUpgrade (767.76s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-572000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-572000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.541392269s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-572000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-572000: (2.359393818s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-572000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-572000 status --format={{.Host}}: exit status 7 (67.252445ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-572000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0728 19:15:44.074317    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 19:15:50.069249    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:16:00.984576    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 19:18:53.135358    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:19:37.095714    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:20:50.063986    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:21:00.147403    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:21:00.980758    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 19:24:37.091448    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:25:50.059642    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:26:00.976177    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-572000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (11m48.343641081s)

-- stdout --
	* [kubernetes-upgrade-572000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "kubernetes-upgrade-572000" primary control-plane node in "kubernetes-upgrade-572000" cluster
	* Restarting existing hyperkit VM for "kubernetes-upgrade-572000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 19:14:51.799491    6066 out.go:291] Setting OutFile to fd 1 ...
	I0728 19:14:51.799759    6066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:14:51.799764    6066 out.go:304] Setting ErrFile to fd 2...
	I0728 19:14:51.799768    6066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 19:14:51.799952    6066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 19:14:51.801361    6066 out.go:298] Setting JSON to false
	I0728 19:14:51.824129    6066 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6262,"bootTime":1722213029,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 19:14:51.824217    6066 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 19:14:51.850423    6066 out.go:177] * [kubernetes-upgrade-572000] minikube v1.33.1 on Darwin 14.5
	I0728 19:14:51.892250    6066 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 19:14:51.892289    6066 notify.go:220] Checking for updates...
	I0728 19:14:51.934370    6066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 19:14:51.955206    6066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 19:14:51.977361    6066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 19:14:51.998341    6066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 19:14:52.021088    6066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 19:14:52.040689    6066 config.go:182] Loaded profile config "kubernetes-upgrade-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0728 19:14:52.041052    6066 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:14:52.041091    6066 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:14:52.050126    6066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53769
	I0728 19:14:52.050519    6066 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:14:52.050937    6066 main.go:141] libmachine: Using API Version  1
	I0728 19:14:52.050952    6066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:14:52.051206    6066 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:14:52.051338    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:14:52.051529    6066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 19:14:52.051797    6066 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:14:52.051830    6066 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:14:52.060237    6066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53771
	I0728 19:14:52.060577    6066 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:14:52.060916    6066 main.go:141] libmachine: Using API Version  1
	I0728 19:14:52.060925    6066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:14:52.061153    6066 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:14:52.061279    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:14:52.089297    6066 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 19:14:52.130424    6066 start.go:297] selected driver: hyperkit
	I0728 19:14:52.130436    6066 start.go:901] validating driver "hyperkit" against &{Name:kubernetes-upgrade-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.20 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 19:14:52.130542    6066 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 19:14:52.133357    6066 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:14:52.133455    6066 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 19:14:52.141557    6066 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 19:14:52.145233    6066 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:14:52.145254    6066 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 19:14:52.145393    6066 cni.go:84] Creating CNI manager for ""
	I0728 19:14:52.145409    6066 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 19:14:52.145452    6066 start.go:340] cluster config:
	{Name:kubernetes-upgrade-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.20 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 19:14:52.145538    6066 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 19:14:52.186294    6066 out.go:177] * Starting "kubernetes-upgrade-572000" primary control-plane node in "kubernetes-upgrade-572000" cluster
	I0728 19:14:52.206498    6066 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 19:14:52.206521    6066 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0728 19:14:52.206537    6066 cache.go:56] Caching tarball of preloaded images
	I0728 19:14:52.206642    6066 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 19:14:52.206651    6066 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0728 19:14:52.206721    6066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/kubernetes-upgrade-572000/config.json ...
	I0728 19:14:52.207122    6066 start.go:360] acquireMachinesLock for kubernetes-upgrade-572000: {Name:mkef7f2112c4918eb4f7118502f77c7d1d6595a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 19:25:22.582094    6066 start.go:364] duration metric: took 10m30.383613861s to acquireMachinesLock for "kubernetes-upgrade-572000"
	I0728 19:25:22.582148    6066 start.go:96] Skipping create...Using existing machine configuration
	I0728 19:25:22.582158    6066 fix.go:54] fixHost starting: 
	I0728 19:25:22.582460    6066 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 19:25:22.582476    6066 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 19:25:22.591119    6066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53945
	I0728 19:25:22.591479    6066 main.go:141] libmachine: () Calling .GetVersion
	I0728 19:25:22.591851    6066 main.go:141] libmachine: Using API Version  1
	I0728 19:25:22.591895    6066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 19:25:22.592105    6066 main.go:141] libmachine: () Calling .GetMachineName
	I0728 19:25:22.592230    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:22.592330    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetState
	I0728 19:25:22.592427    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:25:22.592510    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | hyperkit pid from json: 5984
	I0728 19:25:22.593422    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | hyperkit pid 5984 missing from process table
	I0728 19:25:22.593457    6066 fix.go:112] recreateIfNeeded on kubernetes-upgrade-572000: state=Stopped err=<nil>
	I0728 19:25:22.593473    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	W0728 19:25:22.593554    6066 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 19:25:22.647114    6066 out.go:177] * Restarting existing hyperkit VM for "kubernetes-upgrade-572000" ...
	I0728 19:25:22.668110    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .Start
	I0728 19:25:22.668266    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:25:22.668285    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/hyperkit.pid
	I0728 19:25:22.668330    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Using UUID 86120644-e1ee-43d9-9270-270c572b70d5
	I0728 19:25:22.694541    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Generated MAC 7e:ed:ce:d:8e:99
	I0728 19:25:22.694560    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-572000
	I0728 19:25:22.694722    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"86120644-e1ee-43d9-9270-270c572b70d5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003121e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:25:22.694782    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"86120644-e1ee-43d9-9270-270c572b70d5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003121e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0728 19:25:22.694845    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "86120644-e1ee-43d9-9270-270c572b70d5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/kubernetes-upgrade-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-572000"}
	I0728 19:25:22.694889    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 86120644-e1ee-43d9-9270-270c572b70d5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/kubernetes-upgrade-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/tty,log=/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/bzimage,/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-572000"
	I0728 19:25:22.694902    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0728 19:25:22.696276    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 DEBUG: hyperkit: Pid is 6487
	I0728 19:25:22.696790    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Attempt 0
	I0728 19:25:22.696812    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 19:25:22.696883    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | hyperkit pid from json: 6487
	I0728 19:25:22.698820    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Searching for 7e:ed:ce:d:8e:99 in /var/db/dhcpd_leases ...
	I0728 19:25:22.699138    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0728 19:25:22.699158    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:7e:ed:ce:d:8e:99 ID:1,7e:ed:ce:d:8e:99 Lease:0x66a6fb1b}
	I0728 19:25:22.699193    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | Found match: 7e:ed:ce:d:8e:99
	I0728 19:25:22.699218    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | IP: 192.169.0.20
	I0728 19:25:22.699263    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetConfigRaw
	I0728 19:25:22.700038    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetIP
	I0728 19:25:22.700237    6066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/kubernetes-upgrade-572000/config.json ...
	I0728 19:25:22.700759    6066 machine.go:94] provisionDockerMachine start ...
	I0728 19:25:22.700770    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:22.700925    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:22.701069    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:22.701186    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:22.701297    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:22.701410    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:22.701529    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:22.701742    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:22.701753    6066 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 19:25:22.704945    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0728 19:25:22.712981    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0728 19:25:22.714023    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:25:22.714043    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:25:22.714074    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:25:22.714085    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:25:23.102985    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0728 19:25:23.103001    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0728 19:25:23.217630    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0728 19:25:23.217655    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0728 19:25:23.217669    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0728 19:25:23.217678    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0728 19:25:23.218567    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0728 19:25:23.218581    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0728 19:25:28.839222    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0728 19:25:28.839242    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0728 19:25:28.839294    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0728 19:25:28.864092    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) DBG | 2024/07/28 19:25:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0728 19:25:35.871776    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 19:25:35.871802    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetMachineName
	I0728 19:25:35.871935    6066 buildroot.go:166] provisioning hostname "kubernetes-upgrade-572000"
	I0728 19:25:35.871945    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetMachineName
	I0728 19:25:35.872056    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:35.872159    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:35.872252    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:35.872338    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:35.872431    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:35.872570    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:35.872763    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:35.872772    6066 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-572000 && echo "kubernetes-upgrade-572000" | sudo tee /etc/hostname
	I0728 19:25:35.945845    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-572000
	
	I0728 19:25:35.945866    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:35.946006    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:35.946106    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:35.946197    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:35.946287    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:35.946408    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:35.946556    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:35.946569    6066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 19:25:36.014973    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 19:25:36.014993    6066 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1006/.minikube}
	I0728 19:25:36.015006    6066 buildroot.go:174] setting up certificates
	I0728 19:25:36.015014    6066 provision.go:84] configureAuth start
	I0728 19:25:36.015022    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetMachineName
	I0728 19:25:36.015157    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetIP
	I0728 19:25:36.015242    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:36.015340    6066 provision.go:143] copyHostCerts
	I0728 19:25:36.015439    6066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem, removing ...
	I0728 19:25:36.015449    6066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem
	I0728 19:25:36.015623    6066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/ca.pem (1078 bytes)
	I0728 19:25:36.015869    6066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem, removing ...
	I0728 19:25:36.015876    6066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem
	I0728 19:25:36.015971    6066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/cert.pem (1123 bytes)
	I0728 19:25:36.016159    6066 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem, removing ...
	I0728 19:25:36.016165    6066 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem
	I0728 19:25:36.016253    6066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1006/.minikube/key.pem (1679 bytes)
	I0728 19:25:36.016413    6066 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-572000 san=[127.0.0.1 192.169.0.20 kubernetes-upgrade-572000 localhost minikube]
	I0728 19:25:36.229609    6066 provision.go:177] copyRemoteCerts
	I0728 19:25:36.229679    6066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 19:25:36.229697    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:36.229840    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:36.229945    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.230061    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:36.230181    6066 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/id_rsa Username:docker}
	I0728 19:25:36.267686    6066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 19:25:36.287812    6066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0728 19:25:36.307209    6066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 19:25:36.326823    6066 provision.go:87] duration metric: took 311.800651ms to configureAuth
	I0728 19:25:36.326844    6066 buildroot.go:189] setting minikube options for container-runtime
	I0728 19:25:36.326976    6066 config.go:182] Loaded profile config "kubernetes-upgrade-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0728 19:25:36.326989    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:36.327116    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:36.327202    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:36.327294    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.327385    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.327460    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:36.327575    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:36.327704    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:36.327712    6066 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 19:25:36.390209    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 19:25:36.390221    6066 buildroot.go:70] root file system type: tmpfs
	I0728 19:25:36.390308    6066 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 19:25:36.390323    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:36.390457    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:36.390565    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.390658    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.390778    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:36.390923    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:36.391056    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:36.391100    6066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 19:25:36.463576    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 19:25:36.463604    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:36.463737    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:36.463844    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.463962    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:36.464052    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:36.464185    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:36.464342    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:36.464353    6066 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 19:25:38.031239    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 19:25:38.031255    6066 machine.go:97] duration metric: took 15.330699092s to provisionDockerMachine
	I0728 19:25:38.031266    6066 start.go:293] postStartSetup for "kubernetes-upgrade-572000" (driver="hyperkit")
	I0728 19:25:38.031280    6066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 19:25:38.031291    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:38.031513    6066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 19:25:38.031525    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:38.031634    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:38.031730    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:38.031821    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:38.031910    6066 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/id_rsa Username:docker}
	I0728 19:25:38.075504    6066 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 19:25:38.080356    6066 info.go:137] Remote host: Buildroot 2023.02.9
	I0728 19:25:38.080370    6066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/addons for local assets ...
	I0728 19:25:38.080473    6066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1006/.minikube/files for local assets ...
	I0728 19:25:38.080665    6066 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem -> 15332.pem in /etc/ssl/certs
	I0728 19:25:38.080888    6066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 19:25:38.089119    6066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/ssl/certs/15332.pem --> /etc/ssl/certs/15332.pem (1708 bytes)
	I0728 19:25:38.115460    6066 start.go:296] duration metric: took 84.185997ms for postStartSetup
	I0728 19:25:38.115486    6066 fix.go:56] duration metric: took 15.533544827s for fixHost
	I0728 19:25:38.115498    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:38.115644    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:38.115772    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:38.115865    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:38.115971    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:38.116104    6066 main.go:141] libmachine: Using SSH client type: native
	I0728 19:25:38.116245    6066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3b5e0c0] 0x3b60e20 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0728 19:25:38.116258    6066 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 19:25:38.176682    6066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722219938.219047085
	
	I0728 19:25:38.176694    6066 fix.go:216] guest clock: 1722219938.219047085
	I0728 19:25:38.176699    6066 fix.go:229] Guest: 2024-07-28 19:25:38.219047085 -0700 PDT Remote: 2024-07-28 19:25:38.115489 -0700 PDT m=+646.359966908 (delta=103.558085ms)
	I0728 19:25:38.176716    6066 fix.go:200] guest clock delta is within tolerance: 103.558085ms
	I0728 19:25:38.176720    6066 start.go:83] releasing machines lock for "kubernetes-upgrade-572000", held for 15.594812176s
	I0728 19:25:38.176741    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:38.176884    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetIP
	I0728 19:25:38.177008    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:38.177332    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:38.177455    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .DriverName
	I0728 19:25:38.177516    6066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 19:25:38.177545    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:38.177609    6066 ssh_runner.go:195] Run: cat /version.json
	I0728 19:25:38.177625    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHHostname
	I0728 19:25:38.177655    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:38.177763    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHPort
	I0728 19:25:38.177779    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:38.177851    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:38.177896    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHKeyPath
	I0728 19:25:38.177935    6066 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/id_rsa Username:docker}
	I0728 19:25:38.178046    6066 main.go:141] libmachine: (kubernetes-upgrade-572000) Calling .GetSSHUsername
	I0728 19:25:38.178141    6066 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/kubernetes-upgrade-572000/id_rsa Username:docker}
	I0728 19:25:38.211268    6066 ssh_runner.go:195] Run: systemctl --version
	I0728 19:25:38.216162    6066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 19:25:38.258772    6066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 19:25:38.258829    6066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0728 19:25:38.266515    6066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0728 19:25:38.279484    6066 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 19:25:38.279504    6066 start.go:495] detecting cgroup driver to use...
	I0728 19:25:38.279610    6066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 19:25:38.299941    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0728 19:25:38.315724    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 19:25:38.326718    6066 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 19:25:38.326775    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 19:25:38.335769    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 19:25:38.344590    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 19:25:38.353234    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 19:25:38.362052    6066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 19:25:38.371321    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 19:25:38.380375    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 19:25:38.389276    6066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 19:25:38.398183    6066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 19:25:38.406381    6066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 19:25:38.414622    6066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 19:25:38.518480    6066 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 19:25:38.537317    6066 start.go:495] detecting cgroup driver to use...
	I0728 19:25:38.537404    6066 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 19:25:38.552546    6066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 19:25:38.565720    6066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 19:25:38.589888    6066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 19:25:38.601550    6066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 19:25:38.612433    6066 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 19:25:38.633274    6066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 19:25:38.643946    6066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 19:25:38.659929    6066 ssh_runner.go:195] Run: which cri-dockerd
	I0728 19:25:38.662851    6066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 19:25:38.670174    6066 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0728 19:25:38.683627    6066 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 19:25:38.780583    6066 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 19:25:38.896405    6066 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 19:25:38.896493    6066 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 19:25:38.910333    6066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 19:25:39.016856    6066 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 19:26:39.916581    6066 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.90054451s)
	I0728 19:26:39.916649    6066 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0728 19:26:39.953030    6066 out.go:177] 
	W0728 19:26:39.973727    6066 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 02:25:36 kubernetes-upgrade-572000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:36.782749742Z" level=info msg="Starting up"
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:36.783199728Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:36.783707442Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=504
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.800168079Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819597764Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819664777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819731316Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819767509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819949857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820000956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820134681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820181219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820214060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820243890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820361741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820557752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822179640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822234579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822369296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822415273Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822518218Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822569269Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822957011Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823045509Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823090322Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823127440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823160382Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823224789Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823403611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823483182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823525812Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823560768Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823659751Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823693931Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823724125Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823755108Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823790362Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823821284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823852225Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823881881Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823924765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823961635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823995735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824027788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824061364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824093226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824123891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824153319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824182767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824220162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824256706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824286750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824316381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824347970Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824384098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824416465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824445993Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824519181Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824563794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824594625Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824657446Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824701168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824732422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824764053Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824968613Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.825057412Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.825119109Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.825156116Z" level=info msg="containerd successfully booted in 0.025842s"
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.801705222Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.821857323Z" level=info msg="Loading containers: start."
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.929655208Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.993053746Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.037941810Z" level=info msg="Loading containers: done."
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.046120895Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.046295946Z" level=info msg="Daemon has completed initialization"
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.067489207Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 02:25:38 kubernetes-upgrade-572000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.067623160Z" level=info msg="API listen on [::]:2376"
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.090665695Z" level=info msg="Processing signal 'terminated'"
	Jul 29 02:25:39 kubernetes-upgrade-572000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091734609Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091818664Z" level=info msg="Daemon shutdown complete"
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091873724Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091886819Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 02:25:40 kubernetes-upgrade-572000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 02:25:40 kubernetes-upgrade-572000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 02:25:40 kubernetes-upgrade-572000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 02:25:40 kubernetes-upgrade-572000 dockerd[1004]: time="2024-07-29T02:25:40.134903185Z" level=info msg="Starting up"
	Jul 29 02:26:40 kubernetes-upgrade-572000 dockerd[1004]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 02:26:40 kubernetes-upgrade-572000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 02:26:40 kubernetes-upgrade-572000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 02:26:40 kubernetes-upgrade-572000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 02:25:36 kubernetes-upgrade-572000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:36.782749742Z" level=info msg="Starting up"
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:36.783199728Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:36.783707442Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=504
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.800168079Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819597764Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819664777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819731316Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819767509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.819949857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820000956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820134681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820181219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820214060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820243890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820361741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.820557752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822179640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822234579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822369296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822415273Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822518218Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822569269Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.822957011Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823045509Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823090322Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823127440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823160382Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823224789Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823403611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823483182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823525812Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823560768Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823659751Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823693931Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823724125Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823755108Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823790362Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823821284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823852225Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823881881Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823924765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823961635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.823995735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824027788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824061364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824093226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824123891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824153319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824182767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824220162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824256706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824286750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824316381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824347970Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824384098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824416465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824445993Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824519181Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824563794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824594625Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824657446Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824701168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824732422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824764053Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.824968613Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.825057412Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.825119109Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 02:25:36 kubernetes-upgrade-572000 dockerd[504]: time="2024-07-29T02:25:36.825156116Z" level=info msg="containerd successfully booted in 0.025842s"
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.801705222Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.821857323Z" level=info msg="Loading containers: start."
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.929655208Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 02:25:37 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:37.993053746Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.037941810Z" level=info msg="Loading containers: done."
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.046120895Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.046295946Z" level=info msg="Daemon has completed initialization"
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.067489207Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 02:25:38 kubernetes-upgrade-572000 systemd[1]: Started Docker Application Container Engine.
	Jul 29 02:25:38 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:38.067623160Z" level=info msg="API listen on [::]:2376"
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.090665695Z" level=info msg="Processing signal 'terminated'"
	Jul 29 02:25:39 kubernetes-upgrade-572000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091734609Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091818664Z" level=info msg="Daemon shutdown complete"
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091873724Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 02:25:39 kubernetes-upgrade-572000 dockerd[497]: time="2024-07-29T02:25:39.091886819Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 02:25:40 kubernetes-upgrade-572000 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 02:25:40 kubernetes-upgrade-572000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 02:25:40 kubernetes-upgrade-572000 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 02:25:40 kubernetes-upgrade-572000 dockerd[1004]: time="2024-07-29T02:25:40.134903185Z" level=info msg="Starting up"
	Jul 29 02:26:40 kubernetes-upgrade-572000 dockerd[1004]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 02:26:40 kubernetes-upgrade-572000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 02:26:40 kubernetes-upgrade-572000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 02:26:40 kubernetes-upgrade-572000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0728 19:26:39.973817    6066 out.go:239] * 
	* 
	W0728 19:26:39.974550    6066 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 19:26:40.058696    6066 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-amd64 start -p kubernetes-upgrade-572000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit  : exit status 90
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-572000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-572000 version --output=json: exit status 1 (37.352865ms)

** stderr ** 
	error: context "kubernetes-upgrade-572000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-28 19:26:40.128989 -0700 PDT m=+6041.882469233
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-572000 -n kubernetes-upgrade-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-572000 -n kubernetes-upgrade-572000: exit status 6 (149.814943ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0728 19:26:40.266117    6547 status.go:417] kubeconfig endpoint: get endpoint: "kubernetes-upgrade-572000" does not appear in /Users/jenkins/minikube-integration/19312-1006/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-572000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-572000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-572000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-572000: (5.252066178s)
--- FAIL: TestKubernetesUpgrade (767.76s)

TestPause/serial/Start (145.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-082000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-082000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m25.185265257s)

-- stdout --
	* [pause-082000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-082000" primary control-plane node in "pause-082000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-082000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:4b:6e:ee:26:d7
	* Failed to start hyperkit VM. Running "minikube delete -p pause-082000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:ed:d9:6d:d2:d5
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:ed:d9:6d:d2:d5
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-082000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-082000 -n pause-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-082000 -n pause-082000: exit status 7 (80.315457ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0728 19:39:05.373801    7398 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0728 19:39:05.373827    7398 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-082000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (145.27s)

TestNetworkPlugins/group/false/Start (7201.708s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-985000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0728 19:45:50.145436    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:45:54.992467    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/auto-985000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (46m5s)
	TestNetworkPlugins/group/calico (21s)
	TestNetworkPlugins/group/calico/Start (21s)
	TestNetworkPlugins/group/false (19s)
	TestNetworkPlugins/group/false/Start (19s)
	TestStartStop (7m26s)

goroutine 3468 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008704e0, 0xc0009ffbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0005c23a8, {0x13d5db00, 0x2a, 0x2a}, {0xf832825?, 0x1136ca59?, 0x13d80ac0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008fe5a0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008fe5a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000644d00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 973 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc000582f50, 0xc0020cbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x0?, 0xc000582f50, 0xc000582f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000582fd0?, 0xfd6ece5?, 0xc0009b1740?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 988
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 190 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0014ae900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 189
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 89 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 88
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 3153 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001d47d50, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0014af500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d47d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013da8e0, {0x129d7b60, 0xc002088f90}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013da8e0, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3146
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3155 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3154
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 152 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc000587f50, 0xc0013a4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x0?, 0xc000587f50, 0xc000587f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 191
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2202 [chan receive]:
testing.(*T).Run(0xc00154dd40, {0x11312f6f?, 0x12338670?}, 0xc00149e480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00154dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc00154dd40, 0xc000a0c900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 151 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a85250, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0014ae7e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a85280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000214000, {0x129d7b60, 0xc00154a030}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000214000, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 191
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3145 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0014af680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3133
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3266 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0014eeb10, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019eab40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014eeb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014f7d30, {0x129d7b60, 0xc0015615f0}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014f7d30, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3283
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3300 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001f9c010, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0014a22a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001f9c040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b86c60, {0x129d7b60, 0xc001ff4b10}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b86c60, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3289
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 153 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 191 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a85280, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 189
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2807 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018f6300, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2805
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2847 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2846
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1299 [chan send, 84 minutes]:
os/exec.(*Cmd).watchCtx(0xc00178de00, 0xc001898480)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1298
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 988 [chan receive, 86 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014ef700, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 876
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2103 [chan receive, 46 minutes]:
testing.(*T).Run(0xc0014a81a0, {0x11312f6a?, 0x472ff6620dd?}, 0xc0016905a0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014a81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0014a81a0, 0x129cb918)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1396 [chan send, 84 minutes]:
os/exec.(*Cmd).watchCtx(0xc00196fb00, 0xc001a2aa80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 847
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3465 [IO wait]:
internal/poll.runtime_pollWait(0x5b724568, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001f7b260?, 0xc0008f7a21?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001f7b260, {0xc0008f7a21, 0x5df, 0x5df})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a288f0, {0xc0008f7a21?, 0x30?, 0x221?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013b87e0, {0x129d6538, 0xc000036690})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x129d6678, 0xc0013b87e0}, {0x129d6538, 0xc000036690}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x129d6678, 0xc0013b87e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x13d1f300?, {0x129d6678?, 0xc0013b87e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x129d6678, 0xc0013b87e0}, {0x129d65f8, 0xc000a288f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0013b8720?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3464
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3478 [IO wait]:
internal/poll.runtime_pollWait(0x5b5ab5c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d18f00?, 0xc001fad21e?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d18f00, {0xc001fad21e, 0x5e2, 0x5e2})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000366d8, {0xc001fad21e?, 0x5b5360c8?, 0x21e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00149e540, {0x129d6538, 0xc000a289f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x129d6678, 0xc00149e540}, {0x129d6538, 0xc000a289f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x129d6678, 0xc00149e540})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x13d1f300?, {0x129d6678?, 0xc00149e540?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x129d6678, 0xc00149e540}, {0x129d65f8, 0xc0000366d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00149e480?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3477
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2863 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0014a3ce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 714 [IO wait, 108 minutes]:
internal/poll.runtime_pollWait(0x5b5abb98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000a0c700?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000a0c700)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000a0c700)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000818060)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000818060)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008ec0f0, {0x129ee710, 0xc000818060})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0008ec0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00154c1a0?, 0xc00154c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 711
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 3282 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019eade0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3262
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 972 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0014ef6d0, 0x24)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0009b1620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014ef700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014f6f30, {0x129d7b60, 0xc00154af90}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014f6f30, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 988
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3053 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3052
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 987 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009b1740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 876
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3480 [select]:
os/exec.(*Cmd).watchCtx(0xc001d60900, 0xc0005b0b40)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3477
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1150 [chan send, 84 minutes]:
os/exec.(*Cmd).watchCtx(0xc001608d80, 0xc0005b0ba0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1149
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3479 [IO wait]:
internal/poll.runtime_pollWait(0x5b5abc90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d18fc0?, 0xc0016627e0?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d18fc0, {0xc0016627e0, 0x7820, 0x7820})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000036710, {0xc0016627e0?, 0xc000105a40?, 0x7e0f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00149e570, {0x129d6538, 0xc000a28a40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x129d6678, 0xc00149e570}, {0x129d6538, 0xc000a28a40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001bcae78?, {0x129d6678, 0xc00149e570})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x13d1f300?, {0x129d6678?, 0xc00149e570?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x129d6678, 0xc00149e570}, {0x129d65f8, 0xc000036710}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0018990e0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3477
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 1444 [select, 84 minutes]:
net/http.(*persistConn).readLoop(0xc001a286c0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1428
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 3154 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc000588750, 0xc000588798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x0?, 0xc000588750, 0xc000588798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0xc0014a9d40?, 0xf8a66a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005887d0?, 0xf8ec9a4?, 0xc00151c300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3146
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2656 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0014ee990, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001854840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014ee9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013da9f0, {0x129d7b60, 0xc00141cbd0}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013da9f0, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2668
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3302 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3301
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3481 [select]:
golang.org/x/net/http2.(*ClientConn).Ping(0xc0001fc600, {0x129fb680, 0xc0004100e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:3061 +0x2c5
golang.org/x/net/http2.(*ClientConn).healthCheck(0xc0001fc600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:876 +0xb1
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 3051 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001d46310, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0009b1ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d46340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017aa9c0, {0x129d7b60, 0xc0014ecd20}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017aa9c0, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3024
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3023 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001f70060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3047
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2864 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0005c1dc0, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3146 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d47d80, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3133
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3301 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc001bc9750, 0xc001bc9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x0?, 0xc001bc9750, 0xc001bc9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3289
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 974 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 973
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2204 [chan receive]:
testing.(*T).Run(0xc0008716c0, {0x11312f6f?, 0x12338670?}, 0xc0013b8720)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008716c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0008716c0, 0xc000a0ca00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2195
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2191 [chan receive, 9 minutes]:
testing.(*T).Run(0xc0014a8b60, {0x11312f6a?, 0xf8a5d73?}, 0x129cbac0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0014a8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0014a8b60, 0x129cb960)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3268 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3267
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3272 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5b5ab8b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001936c00?, 0xc001472000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001936c00, {0xc001472000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc001936c00, {0xc001472000?, 0xc000650280?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000a29058, {0xc001472000?, 0xc001472005?, 0x1a?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001d450b0, {0xc001472000?, 0x0?, 0xc001d450b0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0008a5b30, {0x129d82a0, 0xc001d450b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0008a5888, {0x5b64cfe8, 0xc00138f038}, 0xc0013a5980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0008a5888, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0008a5888, {0xc001457000, 0x1000, 0xc001c3c700?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001a45680, {0xc0006dc900, 0x9, 0x13d1a7f0?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x129d6718, 0xc001a45680}, {0xc0006dc900, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0006dc900, 0x9, 0xc0013a5dc0?}, {0x129d6718?, 0xc001a45680?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006dc8c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0013a5fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0001fc600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3271
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

goroutine 1445 [select, 84 minutes]:
net/http.(*persistConn).writeLoop(0xc001a286c0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1428
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 3283 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014eeb40, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3262
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2795 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0018f62d0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001a44ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018f6300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00067cc60, {0x129d7b60, 0xc001a32120}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00067cc60, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2807
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1344 [chan send, 84 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019d4600, 0xc0018ec000)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1343
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3052 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc0014d1f50, 0xc0014d1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x7?, 0xc0014d1f50, 0xc0014d1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0xfcf8016?, 0xc001aa3080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014d1fd0?, 0xf8ec9a4?, 0xc0015601e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3024
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3288 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0014a23c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3280
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2797 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2796
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3464 [syscall]:
syscall.syscall6(0xc0013b9f80?, 0x1000000000010?, 0x10000000019?, 0x5b14cbd8?, 0x90?, 0x146a2108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00139fc68?, 0xf7730c5?, 0x90?, 0x12937e80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xf8a39e5?, 0xc00139fc9c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0005fa7b0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002078d80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc002078d80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0018761a0, 0xc002078d80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0018761a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0018761a0, 0xc0013b8720)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2204
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3477 [syscall]:
syscall.syscall6(0xc00149ff80?, 0x1000000000010?, 0x10000000019?, 0x5b6a76f8?, 0x90?, 0x146a25b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0014e9c68?, 0xf7730c5?, 0x90?, 0x12937e80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xf8a39e5?, 0xc0014e9c9c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00016e570)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001d60900)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001d60900)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001556000, 0xc001d60900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc001556000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc001556000, 0xc00149e480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2202
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3024 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d46340, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3047
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3267 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc00197c750, 0xc00197c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x7?, 0xc00197c750, 0xc00197c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0xc0015561a0?, 0xf8a66a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00197c7d0?, 0xf8ec9a4?, 0xc0017b2c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3283
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2172 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0014a2240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2137
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2173 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018f68c0, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2137
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3466 [IO wait]:
internal/poll.runtime_pollWait(0x5b724850, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001f7b320?, 0xc0017708c6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001f7b320, {0xc0017708c6, 0xb73a, 0xb73a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a28978, {0xc0017708c6?, 0xc000105a40?, 0xfe68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013b8810, {0x129d6538, 0xc000036698})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x129d6678, 0xc0013b8810}, {0x129d6538, 0xc000036698}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001bcae78?, {0x129d6678, 0xc0013b8810})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x13d1f300?, {0x129d6678?, 0xc0013b8810?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x129d6678, 0xc0013b8810}, {0x129d65f8, 0xc000a28978}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0018990e0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3464
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2176 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0018f6890, 0x1b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0014a2120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018f68c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008a74b0, {0x129d7b60, 0xc0017b1890}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008a74b0, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2177 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc001982750, 0xc0014e4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x74?, 0xc001982750, 0xc001982798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0x205d3539313a6f67?, 0x69622f203a6e7552?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x2066746e69727020?, 0x746e757222207325?, 0x70646e652d656d69?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2178 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2177
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3289 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001f9c040, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3280
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2195 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00154c340, 0xc0016905a0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2616 [chan receive, 9 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001556680, 0x129cbac0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2191
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2618 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc00088f090)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015569c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015569c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015569c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015569c0, 0xc000a16680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2616
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2667 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001854960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2652
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2668 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014ee9c0, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2652
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2620 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc00088f090)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001556d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001556d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001556d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001556d00, 0xc000a16700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2616
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2617 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc00088f090)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001556820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001556820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001556820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001556820, 0xc000a16640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2616
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2619 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc00088f090)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001556b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001556b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001556b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001556b60, 0xc000a166c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2616
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3467 [select]:
os/exec.(*Cmd).watchCtx(0xc002078d80, 0xc001898420)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3464
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2621 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc00088f090)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001556ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001556ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001556ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001556ea0, 0xc000a16740)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2616
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2622 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc00088f090)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001557040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001557040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001557040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001557040, 0xc000a167c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2616
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2796 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc001982750, 0xc001982798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x74?, 0xc001982750, 0xc001982798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0x205d3539313a6f67?, 0x69622f203a6e7552?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0019827d0?, 0xf8ec9a4?, 0x70646e652d656d69?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2807
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2845 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0005c1d90, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x124c0700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0014a3bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0005c1dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014f3c90, {0x129d7b60, 0xc001cda0f0}, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014f3c90, 0x3b9aca00, 0x0, 0x1, 0xc000058300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2806 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001a44cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2805
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2846 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc001983f50, 0xc001983f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0xe0?, 0xc001983f50, 0xc001983f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0xc001876680?, 0xf8a66a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001983fd0?, 0xf8ec9a4?, 0xc000a118c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2674 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2673
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2673 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x129fb840, 0xc000058300}, 0xc000582750, 0xc000582798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x129fb840, 0xc000058300}, 0x20?, 0xc000582750, 0xc000582798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x129fb840?, 0xc000058300?}, 0xc0014a8901?, 0xc000058300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005827d0?, 0xf8ec9a4?, 0xc0018ec401?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2668
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a


Test pass (182/227)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.71
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.28
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.30.3/json-events 15.6
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 18.1
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.38
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 1.01
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 231.19
38 TestAddons/serial/Volcano 39.23
40 TestAddons/serial/GCPAuth/Namespaces 0.1
42 TestAddons/parallel/Registry 14.95
43 TestAddons/parallel/Ingress 19.87
44 TestAddons/parallel/InspektorGadget 10.52
45 TestAddons/parallel/MetricsServer 6.73
46 TestAddons/parallel/HelmTiller 10.13
48 TestAddons/parallel/CSI 38.06
49 TestAddons/parallel/Headlamp 20.41
50 TestAddons/parallel/CloudSpanner 5.41
51 TestAddons/parallel/LocalPath 52.42
52 TestAddons/parallel/NvidiaDevicePlugin 5.4
53 TestAddons/parallel/Yakd 10.48
54 TestAddons/StoppedEnableDisable 5.91
62 TestHyperKitDriverInstallOrUpdate 8.99
65 TestErrorSpam/setup 35.17
66 TestErrorSpam/start 1.47
67 TestErrorSpam/status 0.49
68 TestErrorSpam/pause 1.38
69 TestErrorSpam/unpause 1.38
70 TestErrorSpam/stop 155.81
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 62.82
75 TestFunctional/serial/AuditLog 0
77 TestFunctional/serial/KubeContext 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 360.2
82 TestFunctional/serial/CacheCmd/cache/add_local 60.32
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
90 TestFunctional/serial/ExtraConfig 86.53
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 2.89
93 TestFunctional/serial/LogsFileCmd 2.79
94 TestFunctional/serial/InvalidService 4.56
96 TestFunctional/parallel/ConfigCmd 0.47
97 TestFunctional/parallel/DashboardCmd 13.34
98 TestFunctional/parallel/DryRun 1.81
99 TestFunctional/parallel/InternationalLanguage 0.59
100 TestFunctional/parallel/StatusCmd 0.54
104 TestFunctional/parallel/ServiceCmdConnect 9.8
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 31.02
108 TestFunctional/parallel/SSHCmd 0.3
109 TestFunctional/parallel/CpCmd 1.14
110 TestFunctional/parallel/MySQL 31.2
111 TestFunctional/parallel/FileSync 0.24
112 TestFunctional/parallel/CertSync 1.15
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.18
120 TestFunctional/parallel/License 0.58
121 TestFunctional/parallel/Version/short 0.1
122 TestFunctional/parallel/Version/components 0.45
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
128 TestFunctional/parallel/ImageCommands/Setup 1.76
129 TestFunctional/parallel/DockerEnv/bash 0.73
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.13
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.65
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
140 TestFunctional/parallel/ServiceCmd/DeployApp 24.21
141 TestFunctional/parallel/ServiceCmd/List 0.22
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
144 TestFunctional/parallel/ServiceCmd/Format 0.28
145 TestFunctional/parallel/ServiceCmd/URL 0.27
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
158 TestFunctional/parallel/ProfileCmd/profile_list 0.31
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
160 TestFunctional/parallel/MountCmd/any-port 7.29
161 TestFunctional/parallel/MountCmd/specific-port 1.45
162 TestFunctional/parallel/MountCmd/VerifyCleanup 2.37
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 210.53
170 TestMultiControlPlane/serial/DeployApp 5.81
171 TestMultiControlPlane/serial/PingHostFromPods 1.32
172 TestMultiControlPlane/serial/AddWorkerNode 49.98
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
175 TestMultiControlPlane/serial/CopyFile 9.2
176 TestMultiControlPlane/serial/StopSecondaryNode 8.71
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.28
178 TestMultiControlPlane/serial/RestartSecondaryNode 42.78
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.35
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 229.83
181 TestMultiControlPlane/serial/DeleteSecondaryNode 8.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.26
183 TestMultiControlPlane/serial/StopCluster 24.99
190 TestImageBuild/serial/Setup 38.75
191 TestImageBuild/serial/NormalBuild 1.57
192 TestImageBuild/serial/BuildWithBuildArg 0.7
193 TestImageBuild/serial/BuildWithDockerIgnore 0.64
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.6
198 TestJSONOutput/start/Command 51.7
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.48
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.46
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.34
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 92.7
233 TestMultiNode/serial/FreshStart2Nodes 113.62
234 TestMultiNode/serial/DeployApp2Nodes 4.24
235 TestMultiNode/serial/PingHostFrom2Pods 0.87
237 TestMultiNode/serial/MultiNodeLabels 0.05
238 TestMultiNode/serial/ProfileList 0.18
240 TestMultiNode/serial/StopNode 8.83
241 TestMultiNode/serial/StartAfterStop 146.65
243 TestMultiNode/serial/DeleteNode 6.09
244 TestMultiNode/serial/StopMultiNode 16.77
245 TestMultiNode/serial/RestartMultiNode 101.76
246 TestMultiNode/serial/ValidateNameConflict 42.44
253 TestSkaffold 112.8
256 TestRunningBinaryUpgrade 82.44
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.63
272 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.39
273 TestStoppedBinaryUpgrade/Setup 1.28
274 TestStoppedBinaryUpgrade/Upgrade 697.59
277 TestStoppedBinaryUpgrade/MinikubeLogs 2.71
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.45
287 TestNoKubernetes/serial/StartWithK8s 70.68
289 TestNoKubernetes/serial/StartWithStopK8s 8.66
290 TestNoKubernetes/serial/Start 22.11
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
294 TestNoKubernetes/serial/ProfileList 0.49
295 TestNoKubernetes/serial/Stop 2.42
296 TestNoKubernetes/serial/StartNoArgs 19.39
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
TestDownloadOnly/v1.20.0/json-events (22.71s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-932000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-932000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (22.706487577s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.71s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-932000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-932000: exit status 85 (302.457374ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-932000 | jenkins | v1.33.1 | 28 Jul 24 17:45 PDT |          |
	|         | -p download-only-932000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:45:58
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:45:58.223410    1535 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:45:58.223625    1535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:45:58.223631    1535 out.go:304] Setting ErrFile to fd 2...
	I0728 17:45:58.223634    1535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:45:58.223831    1535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	W0728 17:45:58.223935    1535 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1006/.minikube/config/config.json: no such file or directory
	I0728 17:45:58.225855    1535 out.go:298] Setting JSON to true
	I0728 17:45:58.249516    1535 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":929,"bootTime":1722213029,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:45:58.249606    1535 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:45:58.271342    1535 out.go:97] [download-only-932000] minikube v1.33.1 on Darwin 14.5
	I0728 17:45:58.271545    1535 notify.go:220] Checking for updates...
	W0728 17:45:58.271582    1535 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball: no such file or directory
	I0728 17:45:58.292957    1535 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:45:58.313969    1535 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:45:58.336141    1535 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:45:58.356815    1535 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:45:58.377788    1535 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	W0728 17:45:58.420112    1535 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:45:58.420604    1535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:45:58.469982    1535 out.go:97] Using the hyperkit driver based on user configuration
	I0728 17:45:58.470024    1535 start.go:297] selected driver: hyperkit
	I0728 17:45:58.470035    1535 start.go:901] validating driver "hyperkit" against <nil>
	I0728 17:45:58.470200    1535 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:45:58.470460    1535 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:45:58.888203    1535 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:45:58.893520    1535 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:45:58.893541    1535 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:45:58.893566    1535 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:45:58.897674    1535 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0728 17:45:58.898574    1535 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:45:58.898607    1535 cni.go:84] Creating CNI manager for ""
	I0728 17:45:58.898621    1535 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0728 17:45:58.898683    1535 start.go:340] cluster config:
	{Name:download-only-932000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-932000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:45:58.898908    1535 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:45:58.919926    1535 out.go:97] Downloading VM boot image ...
	I0728 17:45:58.920006    1535 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0728 17:46:07.750916    1535 out.go:97] Starting "download-only-932000" primary control-plane node in "download-only-932000" cluster
	I0728 17:46:07.750953    1535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:46:07.807136    1535 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0728 17:46:07.807172    1535 cache.go:56] Caching tarball of preloaded images
	I0728 17:46:07.807774    1535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:46:07.827750    1535 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0728 17:46:07.827771    1535 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:07.910616    1535 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0728 17:46:14.635424    1535 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:14.635613    1535 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:15.184395    1535 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0728 17:46:15.184639    1535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/download-only-932000/config.json ...
	I0728 17:46:15.184664    1535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/download-only-932000/config.json: {Name:mk98deed25a617b11ce3319b871e13b3b0e25d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:15.185240    1535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:46:15.185563    1535 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-932000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-932000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.28s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.28s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-932000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnly/v1.30.3/json-events (15.6s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-193000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-193000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (15.602225245s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (15.60s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-193000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-193000: exit status 85 (292.959404ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-932000 | jenkins | v1.33.1 | 28 Jul 24 17:45 PDT |                     |
	|         | -p download-only-932000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| delete  | -p download-only-932000        | download-only-932000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| start   | -o=json --download-only        | download-only-193000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT |                     |
	|         | -p download-only-193000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:46:21
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:46:21.717522    1561 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:46:21.717799    1561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:21.717805    1561 out.go:304] Setting ErrFile to fd 2...
	I0728 17:46:21.717808    1561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:21.717988    1561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:46:21.719422    1561 out.go:298] Setting JSON to true
	I0728 17:46:21.743647    1561 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":952,"bootTime":1722213029,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:46:21.743725    1561 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:46:21.765236    1561 out.go:97] [download-only-193000] minikube v1.33.1 on Darwin 14.5
	I0728 17:46:21.765397    1561 notify.go:220] Checking for updates...
	I0728 17:46:21.786346    1561 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:46:21.807136    1561 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:46:21.828135    1561 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:46:21.849467    1561 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:46:21.870230    1561 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	W0728 17:46:21.912223    1561 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:46:21.912729    1561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:46:21.942147    1561 out.go:97] Using the hyperkit driver based on user configuration
	I0728 17:46:21.942233    1561 start.go:297] selected driver: hyperkit
	I0728 17:46:21.942248    1561 start.go:901] validating driver "hyperkit" against <nil>
	I0728 17:46:21.942451    1561 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:46:21.942720    1561 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:46:21.952770    1561 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:46:21.957218    1561 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:46:21.957240    1561 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:46:21.957262    1561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:46:21.960198    1561 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0728 17:46:21.960366    1561 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:46:21.960415    1561 cni.go:84] Creating CNI manager for ""
	I0728 17:46:21.960433    1561 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:46:21.960443    1561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 17:46:21.960500    1561 start.go:340] cluster config:
	{Name:download-only-193000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:46:21.960589    1561 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:46:21.981333    1561 out.go:97] Starting "download-only-193000" primary control-plane node in "download-only-193000" cluster
	I0728 17:46:21.981347    1561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:46:22.047200    1561 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:46:22.047280    1561 cache.go:56] Caching tarball of preloaded images
	I0728 17:46:22.047722    1561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:46:22.069263    1561 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0728 17:46:22.069281    1561 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:22.153034    1561 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0728 17:46:31.042556    1561 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:31.042767    1561 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:31.535424    1561 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:46:31.535674    1561 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/download-only-193000/config.json ...
	I0728 17:46:31.535700    1561 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/download-only-193000/config.json: {Name:mkc6a1a23aecf534c6353311087b97d004e3b9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:31.537726    1561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:46:31.537962    1561 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-193000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-193000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-193000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (18.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-924000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-924000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (18.097597895s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (18.10s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.38s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-924000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-924000: exit status 85 (376.734216ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-932000 | jenkins | v1.33.1 | 28 Jul 24 17:45 PDT |                     |
	|         | -p download-only-932000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| delete  | -p download-only-932000             | download-only-932000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| start   | -o=json --download-only             | download-only-193000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT |                     |
	|         | -p download-only-193000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| delete  | -p download-only-193000             | download-only-193000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| start   | -o=json --download-only             | download-only-924000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT |                     |
	|         | -p download-only-924000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:46:38
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:46:38.054218    1585 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:46:38.054403    1585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:38.054409    1585 out.go:304] Setting ErrFile to fd 2...
	I0728 17:46:38.054413    1585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:38.054588    1585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 17:46:38.055968    1585 out.go:298] Setting JSON to true
	I0728 17:46:38.082415    1585 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":969,"bootTime":1722213029,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 17:46:38.082503    1585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:46:38.103534    1585 out.go:97] [download-only-924000] minikube v1.33.1 on Darwin 14.5
	I0728 17:46:38.103618    1585 notify.go:220] Checking for updates...
	I0728 17:46:38.124828    1585 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:46:38.145897    1585 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 17:46:38.166733    1585 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 17:46:38.187898    1585 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:46:38.208875    1585 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	W0728 17:46:38.250744    1585 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:46:38.251206    1585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:46:38.280817    1585 out.go:97] Using the hyperkit driver based on user configuration
	I0728 17:46:38.280897    1585 start.go:297] selected driver: hyperkit
	I0728 17:46:38.280910    1585 start.go:901] validating driver "hyperkit" against <nil>
	I0728 17:46:38.281112    1585 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:46:38.281311    1585 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19312-1006/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0728 17:46:38.291043    1585 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0728 17:46:38.295464    1585 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 17:46:38.295484    1585 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0728 17:46:38.295510    1585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:46:38.298390    1585 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0728 17:46:38.298549    1585 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:46:38.298600    1585 cni.go:84] Creating CNI manager for ""
	I0728 17:46:38.298620    1585 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:46:38.298630    1585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 17:46:38.298712    1585 start.go:340] cluster config:
	{Name:download-only-924000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:46:38.298808    1585 iso.go:125] acquiring lock: {Name:mk932505dbfc2f0b0ea7f6d1a1a65b0594944bb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:46:38.319829    1585 out.go:97] Starting "download-only-924000" primary control-plane node in "download-only-924000" cluster
	I0728 17:46:38.319856    1585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 17:46:38.375848    1585 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0728 17:46:38.375890    1585 cache.go:56] Caching tarball of preloaded images
	I0728 17:46:38.376154    1585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 17:46:38.396767    1585 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0728 17:46:38.396784    1585 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:38.474592    1585 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0728 17:46:48.761468    1585 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:48.761650    1585 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 17:46:49.226016    1585 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0728 17:46:49.226263    1585 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/download-only-924000/config.json ...
	I0728 17:46:49.226290    1585 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/download-only-924000/config.json: {Name:mk623419af75b6c915c52f3317380f83e0924d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:49.228696    1585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 17:46:49.229016    1585 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1006/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-924000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-924000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.38s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-924000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (1.01s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-597000 --alsologtostderr --binary-mirror http://127.0.0.1:49638 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-597000
--- PASS: TestBinaryMirror (1.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-967000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-967000: exit status 85 (208.556603ms)

-- stdout --
	* Profile "addons-967000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-967000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-967000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-967000: exit status 85 (188.199868ms)

-- stdout --
	* Profile "addons-967000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-967000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (231.19s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-967000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-967000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m51.194326671s)
--- PASS: TestAddons/Setup (231.19s)
TestAddons/serial/Volcano (39.23s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 12.371099ms
addons_test.go:897: volcano-scheduler stabilized in 12.39713ms
addons_test.go:905: volcano-admission stabilized in 12.538649ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-vqzsx" [d1338175-bc06-4dcd-8088-cc03d7f48bae] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004473072s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-cw5lc" [4652f212-6543-4837-a610-45ba1463db28] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004242428s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-rkr87" [73b4a3b9-a355-4c0e-9111-4a92978280e7] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003654024s
addons_test.go:932: (dbg) Run:  kubectl --context addons-967000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-967000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-967000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7e37d35d-5cb6-45fe-9ddd-9f82036668be] Pending
helpers_test.go:344: "test-job-nginx-0" [7e37d35d-5cb6-45fe-9ddd-9f82036668be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7e37d35d-5cb6-45fe-9ddd-9f82036668be] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004928234s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable volcano --alsologtostderr -v=1: (9.920278401s)
--- PASS: TestAddons/serial/Volcano (39.23s)
TestAddons/serial/GCPAuth/Namespaces (0.1s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-967000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-967000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)
TestAddons/parallel/Registry (14.95s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.645721ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-pqx7d" [8b2c33fd-50a9-4394-abb9-ffeac21e9600] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005781009s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h7z5t" [21518e8e-a270-4283-a0a2-c923bf113714] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004869479s
addons_test.go:342: (dbg) Run:  kubectl --context addons-967000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-967000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-967000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.297215751s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 ip
2024/07/28 17:52:01 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.95s)
TestAddons/parallel/Ingress (19.87s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-967000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-967000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-967000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f9765e8a-2dac-47a4-acb5-10e8128e63ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f9765e8a-2dac-47a4-acb5-10e8128e63ce] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003117189s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-967000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable ingress-dns --alsologtostderr -v=1: (1.507693371s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable ingress --alsologtostderr -v=1: (7.471765179s)
--- PASS: TestAddons/parallel/Ingress (19.87s)
TestAddons/parallel/InspektorGadget (10.52s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-787fw" [745d72cd-ad22-463e-8065-3705781339ec] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0034473s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-967000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-967000: (5.515971698s)
--- PASS: TestAddons/parallel/InspektorGadget (10.52s)
TestAddons/parallel/MetricsServer (6.73s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.536959ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-7j5lm" [136c6c77-2626-485a-b1b8-939e98f45943] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003609377s
addons_test.go:417: (dbg) Run:  kubectl --context addons-967000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.73s)
TestAddons/parallel/HelmTiller (10.13s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.68481ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-ndbqb" [a1573c41-b5d4-483e-ad1a-fa89d1ecef1f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004951419s
addons_test.go:475: (dbg) Run:  kubectl --context addons-967000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-967000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.711716851s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.13s)
TestAddons/parallel/CSI (38.06s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.993761ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-967000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-967000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d758b1d6-ba93-47ec-8c6b-84864af20da3] Pending
helpers_test.go:344: "task-pv-pod" [d758b1d6-ba93-47ec-8c6b-84864af20da3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d758b1d6-ba93-47ec-8c6b-84864af20da3] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003462242s
addons_test.go:590: (dbg) Run:  kubectl --context addons-967000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-967000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-967000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-967000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-967000 delete pod task-pv-pod: (1.066336901s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-967000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-967000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-967000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [26c2f281-122e-4d0f-b115-26ce68fb7e26] Pending
helpers_test.go:344: "task-pv-pod-restore" [26c2f281-122e-4d0f-b115-26ce68fb7e26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [26c2f281-122e-4d0f-b115-26ce68fb7e26] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003604903s
addons_test.go:632: (dbg) Run:  kubectl --context addons-967000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-967000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-967000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.492510514s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.06s)
TestAddons/parallel/Headlamp (20.41s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-967000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-4lq57" [766814fd-2af3-482e-87db-1ebfd6f1e1ee] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-4lq57" [766814fd-2af3-482e-87db-1ebfd6f1e1ee] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-4lq57" [766814fd-2af3-482e-87db-1ebfd6f1e1ee] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004720615s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable headlamp --alsologtostderr -v=1: (5.443054684s)
--- PASS: TestAddons/parallel/Headlamp (20.41s)
TestAddons/parallel/CloudSpanner (5.41s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-xhl5j" [6c463869-39e5-42a2-ba17-6ea796d08a6a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004324242s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-967000
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)
TestAddons/parallel/LocalPath (52.42s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-967000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-967000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [82c17db5-d949-4e4c-9552-75e3c32530a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [82c17db5-d949-4e4c-9552-75e3c32530a9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [82c17db5-d949-4e4c-9552-75e3c32530a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002687774s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-967000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 ssh "cat /opt/local-path-provisioner/pvc-763f0b3f-3a84-408e-988e-e89dc26ea2ee_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-967000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-967000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.785057521s)
--- PASS: TestAddons/parallel/LocalPath (52.42s)
TestAddons/parallel/NvidiaDevicePlugin (5.4s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-66fdp" [9a1e8368-e066-4ac1-9eeb-adbf3d278b62] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00453352s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-967000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.40s)
TestAddons/parallel/Yakd (10.48s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-wxwg4" [f90c2f18-92fa-47b7-8867-f59b1aef8d03] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003617827s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-967000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-967000 addons disable yakd --alsologtostderr -v=1: (5.477888453s)
--- PASS: TestAddons/parallel/Yakd (10.48s)
TestAddons/StoppedEnableDisable (5.91s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-967000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-967000: (5.376015958s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-967000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-967000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-967000
--- PASS: TestAddons/StoppedEnableDisable (5.91s)
TestHyperKitDriverInstallOrUpdate (8.99s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.99s)
TestErrorSpam/setup (35.17s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-292000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-292000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 --driver=hyperkit : (35.171319215s)
--- PASS: TestErrorSpam/setup (35.17s)
TestErrorSpam/start (1.47s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 start --dry-run
--- PASS: TestErrorSpam/start (1.47s)
TestErrorSpam/status (0.49s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 status
--- PASS: TestErrorSpam/status (0.49s)
TestErrorSpam/pause (1.38s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 pause
--- PASS: TestErrorSpam/pause (1.38s)
TestErrorSpam/unpause (1.38s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 unpause
--- PASS: TestErrorSpam/unpause (1.38s)
TestErrorSpam/stop (155.81s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 stop: (5.372837992s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 stop: (1m15.227568297s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 stop
E0728 17:55:49.986617    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:49.994811    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:50.005949    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:50.028204    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:50.069278    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:50.150276    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:50.310586    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:50.633158    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:51.275684    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:52.559186    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:55:55.122947    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:56:00.248523    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:56:10.494844    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 17:56:30.980280    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-292000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-292000 stop: (1m15.204542027s)
--- PASS: TestErrorSpam/stop (155.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19312-1006/.minikube/files/etc/test/nested/copy/1533/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-596000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0728 17:57:11.942795    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-596000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m2.815503996s)
--- PASS: TestFunctional/serial/StartWithProxy (62.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (360.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 cache add registry.k8s.io/pause:3.1: (1m59.463168953s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cache add registry.k8s.io/pause:3.3
E0728 18:05:50.005773    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 cache add registry.k8s.io/pause:3.3: (2m0.367735979s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 cache add registry.k8s.io/pause:latest: (2m0.367203605s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (360.20s)

TestFunctional/serial/CacheCmd/cache/add_local (60.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1922038393/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cache add minikube-local-cache-test:functional-596000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 cache add minikube-local-cache-test:functional-596000: (59.894183195s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cache delete minikube-local-cache-test:functional-596000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-596000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/ExtraConfig (86.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-596000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-596000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m26.527192542s)
functional_test.go:761: restart took 1m26.527300802s for "functional-596000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (86.53s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-596000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (2.89s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 logs: (2.888556156s)
--- PASS: TestFunctional/serial/LogsCmd (2.89s)

TestFunctional/serial/LogsFileCmd (2.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3022355802/001/logs.txt
E0728 18:20:50.038622    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3022355802/001/logs.txt: (2.793462178s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.79s)

TestFunctional/serial/InvalidService (4.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-596000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-596000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-596000: exit status 115 (291.287692ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31005 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-596000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-596000 delete -f testdata/invalidsvc.yaml: (1.101476032s)
--- PASS: TestFunctional/serial/InvalidService (4.56s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 config get cpus: exit status 14 (55.143176ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 config get cpus: exit status 14 (54.532503ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (13.34s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-596000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-596000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3230: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.34s)

TestFunctional/parallel/DryRun (1.81s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-596000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-596000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (1.317892835s)

-- stdout --
	* [functional-596000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0728 18:22:03.226974    3140 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:22:03.269543    3140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:22:03.269564    3140 out.go:304] Setting ErrFile to fd 2...
	I0728 18:22:03.269574    3140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:22:03.269958    3140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:22:03.311016    3140 out.go:298] Setting JSON to false
	I0728 18:22:03.334543    3140 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3094,"bootTime":1722213029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:22:03.334627    3140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:22:03.450947    3140 out.go:177] * [functional-596000] minikube v1.33.1 on Darwin 14.5
	I0728 18:22:03.530015    3140 notify.go:220] Checking for updates...
	I0728 18:22:03.591713    3140 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:22:03.690897    3140 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:22:03.773512    3140 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:22:03.855748    3140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:22:03.938793    3140 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:22:04.041970    3140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:22:04.105498    3140 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:22:04.106179    3140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:22:04.106264    3140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:22:04.115934    3140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50965
	I0728 18:22:04.116336    3140 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:22:04.116746    3140 main.go:141] libmachine: Using API Version  1
	I0728 18:22:04.116757    3140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:22:04.116966    3140 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:22:04.117078    3140 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 18:22:04.117317    3140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:22:04.117588    3140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:22:04.117616    3140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:22:04.126112    3140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50967
	I0728 18:22:04.126481    3140 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:22:04.126803    3140 main.go:141] libmachine: Using API Version  1
	I0728 18:22:04.126814    3140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:22:04.127035    3140 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:22:04.127160    3140 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 18:22:04.155535    3140 out.go:177] * Using the hyperkit driver based on existing profile
	I0728 18:22:04.196775    3140 start.go:297] selected driver: hyperkit
	I0728 18:22:04.196795    3140 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:22:04.196958    3140 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:22:04.220672    3140 out.go:177] 
	W0728 18:22:04.241718    3140 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0728 18:22:04.262851    3140 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-596000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.81s)

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-596000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-596000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (587.081717ms)

-- stdout --
	* [functional-596000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0728 18:22:04.843944    3175 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:22:04.844113    3175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:22:04.844118    3175 out.go:304] Setting ErrFile to fd 2...
	I0728 18:22:04.844121    3175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:22:04.844326    3175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:22:04.845994    3175 out.go:298] Setting JSON to false
	I0728 18:22:04.869795    3175 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3095,"bootTime":1722213029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0728 18:22:04.869882    3175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:22:04.890843    3175 out.go:177] * [functional-596000] minikube v1.33.1 sur Darwin 14.5
	I0728 18:22:04.932816    3175 notify.go:220] Checking for updates...
	I0728 18:22:04.954714    3175 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:22:04.995503    3175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	I0728 18:22:05.016481    3175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 18:22:05.058675    3175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:22:05.100568    3175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	I0728 18:22:05.142529    3175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:22:05.180448    3175 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:22:05.181161    3175 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:22:05.181254    3175 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:22:05.190715    3175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50996
	I0728 18:22:05.191338    3175 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:22:05.191752    3175 main.go:141] libmachine: Using API Version  1
	I0728 18:22:05.191765    3175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:22:05.192011    3175 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:22:05.192114    3175 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 18:22:05.192317    3175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:22:05.192573    3175 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:22:05.192612    3175 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:22:05.200829    3175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50998
	I0728 18:22:05.201152    3175 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:22:05.201508    3175 main.go:141] libmachine: Using API Version  1
	I0728 18:22:05.201522    3175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:22:05.201729    3175 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:22:05.201845    3175 main.go:141] libmachine: (functional-596000) Calling .DriverName
	I0728 18:22:05.230583    3175 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0728 18:22:05.271834    3175 start.go:297] selected driver: hyperkit
	I0728 18:22:05.271864    3175 start.go:901] validating driver "hyperkit" against &{Name:functional-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:22:05.272083    3175 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:22:05.297545    3175 out.go:177] 
	W0728 18:22:05.318484    3175 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0728 18:22:05.339481    3175 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)

TestFunctional/parallel/StatusCmd (0.54s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.54s)

TestFunctional/parallel/ServiceCmdConnect (9.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-596000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-596000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-vhdv4" [f888f6a5-1bcc-4c24-b4c6-3a03ad88195e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-vhdv4" [f888f6a5-1bcc-4c24-b4c6-3a03ad88195e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.006309615s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31287
functional_test.go:1675: http://192.169.0.4:31287: success! body:

Hostname: hello-node-connect-57b4589c47-vhdv4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31287
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.80s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (31.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f574dc83-f9f7-4906-97f5-b280bec73590] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009493192s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-596000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-596000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-596000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-596000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-596000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23c9e422-1423-4ad7-89a8-50e8066d1e3e] Pending
helpers_test.go:344: "sp-pod" [23c9e422-1423-4ad7-89a8-50e8066d1e3e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23c9e422-1423-4ad7-89a8-50e8066d1e3e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005608593s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-596000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-596000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-596000 delete -f testdata/storage-provisioner/pod.yaml: (1.20567483s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-596000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [88af2060-f9b8-4f9c-b934-61c1a8953d7b] Pending
helpers_test.go:344: "sp-pod" [88af2060-f9b8-4f9c-b934-61c1a8953d7b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [88af2060-f9b8-4f9c-b934-61c1a8953d7b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00695259s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-596000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.02s)

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh -n functional-596000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cp functional-596000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1665237408/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh -n functional-596000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh -n functional-596000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

TestFunctional/parallel/MySQL (31.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-596000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-lxkkz" [0106c318-4484-4a7f-b053-c79709b4a851] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-lxkkz" [0106c318-4484-4a7f-b053-c79709b4a851] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.00436217s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;": exit status 1 (187.579624ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;": exit status 1 (179.468685ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;": exit status 1 (141.642083ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-596000 exec mysql-64454c8b5c-lxkkz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.20s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1533/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /etc/test/nested/copy/1533/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1533.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /etc/ssl/certs/1533.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1533.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /usr/share/ca-certificates/1533.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15332.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /etc/ssl/certs/15332.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15332.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /usr/share/ca-certificates/15332.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.15s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-596000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.18s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh "sudo systemctl is-active crio": exit status 1 (181.386888ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.18s)

TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-596000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kicbase/echo-server:functional-596000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-596000 image ls --format short --alsologtostderr:
I0728 18:22:06.618747    3213 out.go:291] Setting OutFile to fd 1 ...
I0728 18:22:06.619029    3213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:06.619035    3213 out.go:304] Setting ErrFile to fd 2...
I0728 18:22:06.619038    3213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:06.619234    3213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
I0728 18:22:06.619856    3213 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:06.619950    3213 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:06.620313    3213 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:06.620357    3213 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:06.628781    3213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
I0728 18:22:06.629227    3213 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:06.629650    3213 main.go:141] libmachine: Using API Version  1
I0728 18:22:06.629680    3213 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:06.629917    3213 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:06.630036    3213 main.go:141] libmachine: (functional-596000) Calling .GetState
I0728 18:22:06.630120    3213 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0728 18:22:06.630198    3213 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
I0728 18:22:06.631488    3213 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:06.631512    3213 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:06.640285    3213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
I0728 18:22:06.640652    3213 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:06.640998    3213 main.go:141] libmachine: Using API Version  1
I0728 18:22:06.641008    3213 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:06.641240    3213 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:06.641359    3213 main.go:141] libmachine: (functional-596000) Calling .DriverName
I0728 18:22:06.641556    3213 ssh_runner.go:195] Run: systemctl --version
I0728 18:22:06.641573    3213 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
I0728 18:22:06.641652    3213 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
I0728 18:22:06.641737    3213 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
I0728 18:22:06.641834    3213 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
I0728 18:22:06.641932    3213 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
I0728 18:22:06.683327    3213 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0728 18:22:06.755646    3213 main.go:141] libmachine: Making call to close driver server
I0728 18:22:06.755654    3213 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:06.755833    3213 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:06.755843    3213 main.go:141] libmachine: Making call to close connection to plugin binary
I0728 18:22:06.755847    3213 main.go:141] libmachine: Making call to close driver server
I0728 18:22:06.755850    3213 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:06.755852    3213 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:06.756104    3213 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:06.756113    3213 main.go:141] libmachine: Making call to close connection to plugin binary
I0728 18:22:06.756131    3213 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-596000 image ls --format table --alsologtostderr:
|-----------------------------------------|-------------------|---------------|--------|
|                  Image                  |        Tag        |   Image ID    |  Size  |
|-----------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                   | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.3           | 1f6d574d502f3 | 117MB  |
| docker.io/library/nginx                 | alpine            | 1ae23480369fa | 43.2MB |
| registry.k8s.io/kube-scheduler          | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/nginx                 | latest            | a72860cb95fd5 | 188MB  |
| docker.io/kicbase/echo-server           | functional-596000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver              | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/kube-proxy              | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/coredns/coredns         | v1.11.1           | cbb01a7bd410d | 59.8MB |
| localhost/my-image                      | functional-596000 | 6ad4ea9a2aa21 | 1.24MB |
| docker.io/library/mysql                 | 5.7               | 5107333e08a87 | 501MB  |
|-----------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-596000 image ls --format table --alsologtostderr:
I0728 18:22:10.139810    3239 out.go:291] Setting OutFile to fd 1 ...
I0728 18:22:10.140432    3239 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:10.140446    3239 out.go:304] Setting ErrFile to fd 2...
I0728 18:22:10.140456    3239 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:10.141038    3239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
I0728 18:22:10.141632    3239 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:10.141726    3239 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:10.142060    3239 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:10.142101    3239 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:10.150693    3239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51086
I0728 18:22:10.151158    3239 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:10.151576    3239 main.go:141] libmachine: Using API Version  1
I0728 18:22:10.151590    3239 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:10.151815    3239 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:10.151918    3239 main.go:141] libmachine: (functional-596000) Calling .GetState
I0728 18:22:10.152004    3239 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0728 18:22:10.152074    3239 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
I0728 18:22:10.153380    3239 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:10.153408    3239 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:10.161942    3239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51088
I0728 18:22:10.162298    3239 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:10.162675    3239 main.go:141] libmachine: Using API Version  1
I0728 18:22:10.162689    3239 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:10.162925    3239 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:10.163031    3239 main.go:141] libmachine: (functional-596000) Calling .DriverName
I0728 18:22:10.163220    3239 ssh_runner.go:195] Run: systemctl --version
I0728 18:22:10.163238    3239 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
I0728 18:22:10.163326    3239 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
I0728 18:22:10.163410    3239 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
I0728 18:22:10.163487    3239 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
I0728 18:22:10.163597    3239 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
I0728 18:22:10.195006    3239 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0728 18:22:10.229763    3239 main.go:141] libmachine: Making call to close driver server
I0728 18:22:10.229773    3239 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:10.229922    3239 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:10.229931    3239 main.go:141] libmachine: Making call to close connection to plugin binary
I0728 18:22:10.229939    3239 main.go:141] libmachine: Making call to close driver server
I0728 18:22:10.229945    3239 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:10.230122    3239 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:10.230149    3239 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:10.230161    3239 main.go:141] libmachine: Making call to close connection to plugin binary
2024/07/28 18:22:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-596000 image ls --format json --alsologtostderr:
[{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-596000"],"size":"4940000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6ad4ea9a2aa212896106e0fbe00f0a21c27d8dc690146ddc9c0ea6135c8cf5ec","repoDigests":[],"repoTags":["localhost/my-image:functional-596000"],"size":"1240000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-596000 image ls --format json --alsologtostderr:
I0728 18:22:09.956442    3235 out.go:291] Setting OutFile to fd 1 ...
I0728 18:22:09.956759    3235 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:09.956766    3235 out.go:304] Setting ErrFile to fd 2...
I0728 18:22:09.956770    3235 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:09.956985    3235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
I0728 18:22:09.957682    3235 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:09.957796    3235 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:09.958222    3235 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:09.958268    3235 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:09.967633    3235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51081
I0728 18:22:09.968172    3235 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:09.968664    3235 main.go:141] libmachine: Using API Version  1
I0728 18:22:09.968711    3235 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:09.968970    3235 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:09.969111    3235 main.go:141] libmachine: (functional-596000) Calling .GetState
I0728 18:22:09.969219    3235 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0728 18:22:09.969300    3235 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
I0728 18:22:09.970803    3235 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:09.970827    3235 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:09.980042    3235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51083
I0728 18:22:09.980437    3235 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:09.980858    3235 main.go:141] libmachine: Using API Version  1
I0728 18:22:09.980883    3235 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:09.981111    3235 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:09.981222    3235 main.go:141] libmachine: (functional-596000) Calling .DriverName
I0728 18:22:09.981432    3235 ssh_runner.go:195] Run: systemctl --version
I0728 18:22:09.981451    3235 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
I0728 18:22:09.981555    3235 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
I0728 18:22:09.981642    3235 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
I0728 18:22:09.981730    3235 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
I0728 18:22:09.981833    3235 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
I0728 18:22:10.025926    3235 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0728 18:22:10.060589    3235 main.go:141] libmachine: Making call to close driver server
I0728 18:22:10.060599    3235 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:10.060760    3235 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:10.060775    3235 main.go:141] libmachine: Making call to close connection to plugin binary
I0728 18:22:10.060780    3235 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:10.060782    3235 main.go:141] libmachine: Making call to close driver server
I0728 18:22:10.060819    3235 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:10.060991    3235 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:10.061016    3235 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:10.061030    3235 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-596000 image ls --format yaml --alsologtostderr:
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-596000
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-596000 image ls --format yaml --alsologtostderr:
I0728 18:22:06.852223    3217 out.go:291] Setting OutFile to fd 1 ...
I0728 18:22:06.852480    3217 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:06.852485    3217 out.go:304] Setting ErrFile to fd 2...
I0728 18:22:06.852489    3217 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:06.852669    3217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
I0728 18:22:06.853291    3217 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:06.853382    3217 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:06.853728    3217 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:06.853772    3217 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:06.862329    3217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51033
I0728 18:22:06.862737    3217 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:06.863165    3217 main.go:141] libmachine: Using API Version  1
I0728 18:22:06.863174    3217 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:06.863432    3217 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:06.863568    3217 main.go:141] libmachine: (functional-596000) Calling .GetState
I0728 18:22:06.863660    3217 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0728 18:22:06.863737    3217 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
I0728 18:22:06.865060    3217 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:06.865092    3217 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:06.873624    3217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51035
I0728 18:22:06.873982    3217 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:06.874386    3217 main.go:141] libmachine: Using API Version  1
I0728 18:22:06.874407    3217 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:06.874635    3217 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:06.874751    3217 main.go:141] libmachine: (functional-596000) Calling .DriverName
I0728 18:22:06.874913    3217 ssh_runner.go:195] Run: systemctl --version
I0728 18:22:06.874929    3217 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
I0728 18:22:06.875011    3217 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
I0728 18:22:06.875110    3217 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
I0728 18:22:06.875198    3217 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
I0728 18:22:06.875295    3217 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
I0728 18:22:06.906989    3217 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0728 18:22:06.944384    3217 main.go:141] libmachine: Making call to close driver server
I0728 18:22:06.944392    3217 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:06.944564    3217 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:06.944564    3217 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:06.944576    3217 main.go:141] libmachine: Making call to close connection to plugin binary
I0728 18:22:06.944584    3217 main.go:141] libmachine: Making call to close driver server
I0728 18:22:06.944590    3217 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:06.944777    3217 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:06.944833    3217 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:06.944858    3217 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh pgrep buildkitd: exit status 1 (128.128512ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image build -t localhost/my-image:functional-596000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-596000 image build -t localhost/my-image:functional-596000 testdata/build --alsologtostderr: (2.616957642s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-596000 image build -t localhost/my-image:functional-596000 testdata/build --alsologtostderr:
I0728 18:22:07.151156    3226 out.go:291] Setting OutFile to fd 1 ...
I0728 18:22:07.165167    3226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:07.165178    3226 out.go:304] Setting ErrFile to fd 2...
I0728 18:22:07.165183    3226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:22:07.165435    3226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
I0728 18:22:07.166401    3226 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:07.167479    3226 config.go:182] Loaded profile config "functional-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:22:07.167853    3226 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:07.167894    3226 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:07.176466    3226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51046
I0728 18:22:07.176893    3226 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:07.177300    3226 main.go:141] libmachine: Using API Version  1
I0728 18:22:07.177309    3226 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:07.177555    3226 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:07.177675    3226 main.go:141] libmachine: (functional-596000) Calling .GetState
I0728 18:22:07.177759    3226 main.go:141] libmachine: (functional-596000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0728 18:22:07.177834    3226 main.go:141] libmachine: (functional-596000) DBG | hyperkit pid from json: 2051
I0728 18:22:07.179126    3226 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0728 18:22:07.179152    3226 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0728 18:22:07.187689    3226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51048
I0728 18:22:07.188063    3226 main.go:141] libmachine: () Calling .GetVersion
I0728 18:22:07.188437    3226 main.go:141] libmachine: Using API Version  1
I0728 18:22:07.188452    3226 main.go:141] libmachine: () Calling .SetConfigRaw
I0728 18:22:07.188661    3226 main.go:141] libmachine: () Calling .GetMachineName
I0728 18:22:07.188779    3226 main.go:141] libmachine: (functional-596000) Calling .DriverName
I0728 18:22:07.188962    3226 ssh_runner.go:195] Run: systemctl --version
I0728 18:22:07.188982    3226 main.go:141] libmachine: (functional-596000) Calling .GetSSHHostname
I0728 18:22:07.189074    3226 main.go:141] libmachine: (functional-596000) Calling .GetSSHPort
I0728 18:22:07.189176    3226 main.go:141] libmachine: (functional-596000) Calling .GetSSHKeyPath
I0728 18:22:07.189283    3226 main.go:141] libmachine: (functional-596000) Calling .GetSSHUsername
I0728 18:22:07.189397    3226 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/functional-596000/id_rsa Username:docker}
I0728 18:22:07.254338    3226 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1872081538.tar
I0728 18:22:07.254422    3226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0728 18:22:07.275371    3226 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1872081538.tar
I0728 18:22:07.283725    3226 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1872081538.tar: stat -c "%s %y" /var/lib/minikube/build/build.1872081538.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1872081538.tar': No such file or directory
I0728 18:22:07.283766    3226 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1872081538.tar --> /var/lib/minikube/build/build.1872081538.tar (3072 bytes)
I0728 18:22:07.351399    3226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1872081538
I0728 18:22:07.365131    3226 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1872081538 -xf /var/lib/minikube/build/build.1872081538.tar
I0728 18:22:07.376872    3226 docker.go:360] Building image: /var/lib/minikube/build/build.1872081538
I0728 18:22:07.376948    3226 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-596000 /var/lib/minikube/build/build.1872081538
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:6ad4ea9a2aa212896106e0fbe00f0a21c27d8dc690146ddc9c0ea6135c8cf5ec done
#8 naming to localhost/my-image:functional-596000 done
#8 DONE 0.1s
I0728 18:22:09.634851    3226 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-596000 /var/lib/minikube/build/build.1872081538: (2.257883749s)
I0728 18:22:09.634915    3226 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1872081538
I0728 18:22:09.662827    3226 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1872081538.tar
I0728 18:22:09.685903    3226 build_images.go:217] Built localhost/my-image:functional-596000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1872081538.tar
I0728 18:22:09.685933    3226 build_images.go:133] succeeded building to: functional-596000
I0728 18:22:09.685937    3226 build_images.go:134] failed building to: 
I0728 18:22:09.685951    3226 main.go:141] libmachine: Making call to close driver server
I0728 18:22:09.685958    3226 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:09.686140    3226 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:09.686151    3226 main.go:141] libmachine: Making call to close connection to plugin binary
I0728 18:22:09.686159    3226 main.go:141] libmachine: Making call to close driver server
I0728 18:22:09.686159    3226 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:09.686166    3226 main.go:141] libmachine: (functional-596000) Calling .Close
I0728 18:22:09.686312    3226 main.go:141] libmachine: (functional-596000) DBG | Closing plugin on server side
I0728 18:22:09.686315    3226 main.go:141] libmachine: Successfully made call to close driver server
I0728 18:22:09.686326    3226 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.716231507s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-596000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/DockerEnv/bash (0.73s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-596000 docker-env) && out/minikube-darwin-amd64 status -p functional-596000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-596000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image load --daemon kicbase/echo-server:functional-596000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image load --daemon kicbase/echo-server:functional-596000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.65s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-596000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image load --daemon kicbase/echo-server:functional-596000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image save kicbase/echo-server:functional-596000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image rm kicbase/echo-server:functional-596000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-596000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 image save --daemon kicbase/echo-server:functional-596000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-596000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/parallel/ServiceCmd/DeployApp (24.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-596000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-596000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-6p5dt" [1f5c9c89-92b6-4f2c-af38-7b27affa553a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-6p5dt" [1f5c9c89-92b6-4f2c-af38-7b27affa553a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.007303226s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (24.21s)

TestFunctional/parallel/ServiceCmd/List (0.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.22s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 service list -o json
functional_test.go:1494: Took "247.062638ms" to run "out/minikube-darwin-amd64 -p functional-596000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31058
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31058
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-596000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-596000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-596000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-596000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2945: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-596000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-596000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [eea5ede6-e110-43b8-8b20-03b1f22fba23] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [eea5ede6-e110-43b8-8b20-03b1f22fba23] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006887647s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-596000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.214.229 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-596000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "233.392928ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "78.327025ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "181.029956ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "76.310471ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (7.29s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port306094501/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722216114161848000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port306094501/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722216114161848000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port306094501/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722216114161848000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port306094501/001/test-1722216114161848000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.444909ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 01:21 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 01:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 01:21 test-1722216114161848000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh cat /mount-9p/test-1722216114161848000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-596000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [804e42e7-76a6-407c-8cd3-c2146e8f0391] Pending
helpers_test.go:344: "busybox-mount" [804e42e7-76a6-407c-8cd3-c2146e8f0391] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [804e42e7-76a6-407c-8cd3-c2146e8f0391] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [804e42e7-76a6-407c-8cd3-c2146e8f0391] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.009857809s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-596000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port306094501/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.29s)

TestFunctional/parallel/MountCmd/specific-port (1.45s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1431696348/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.669755ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1431696348/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh "sudo umount -f /mount-9p": exit status 1 (129.05414ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-596000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1431696348/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.45s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3623513625/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3623513625/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3623513625/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount1: exit status 1 (186.397799ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount1: exit status 1 (298.948287ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-596000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-596000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3623513625/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3623513625/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-596000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3623513625/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-596000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-596000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-596000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (210.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-168000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0728 18:25:50.039875    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-168000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m30.148077198s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (210.53s)

TestMultiControlPlane/serial/DeployApp (5.81s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-168000 -- rollout status deployment/busybox: (3.579133121s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-d87xd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-g9wpm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-j4555 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-d87xd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-g9wpm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-j4555 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-d87xd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-g9wpm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-j4555 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.81s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-d87xd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-d87xd -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-g9wpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0728 18:26:00.956855    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:00.962438    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:00.972848    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-g9wpm -- sh -c "ping -c 1 192.169.0.1"
E0728 18:26:01.027754    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:01.069392    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:01.149534    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-j4555 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0728 18:26:01.311064    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-168000 -- exec busybox-fc5497c4f-j4555 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
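
The PingHostFromPods commands above recover the host address from busybox nslookup output with the pipeline `awk 'NR==5' | cut -d' ' -f3` before pinging it. A minimal sketch of that extraction against a canned lookup result; the sample text is a hypothetical stand-in imitating busybox nslookup output, not captured from this run:

```shell
# Pull field 3 of line 5 from nslookup-style output, mirroring the
# test's shell pipeline. The sample below is a hypothetical stand-in
# for busybox nslookup output, not output recorded in this report.
extract_host_ip() {
  awk 'NR==5' | cut -d' ' -f3
}

sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.169.0.1 host.minikube.internal'

printf '%s\n' "$sample" | extract_host_ip   # prints 192.169.0.1
```

In the test, the extracted address is then handed to `ping -c 1` inside the pod to verify host reachability.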

TestMultiControlPlane/serial/AddWorkerNode (49.98s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-168000 -v=7 --alsologtostderr
E0728 18:26:01.631873    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:02.273488    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:03.554130    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:06.116108    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:11.236370    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:21.476980    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:26:41.924551    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-168000 -v=7 --alsologtostderr: (49.516537981s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.98s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-168000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

TestMultiControlPlane/serial/CopyFile (9.2s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp testdata/cp-test.txt ha-168000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile2395328239/001/cp-test_ha-168000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000:/home/docker/cp-test.txt ha-168000-m02:/home/docker/cp-test_ha-168000_ha-168000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test_ha-168000_ha-168000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000:/home/docker/cp-test.txt ha-168000-m03:/home/docker/cp-test_ha-168000_ha-168000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test_ha-168000_ha-168000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000:/home/docker/cp-test.txt ha-168000-m04:/home/docker/cp-test_ha-168000_ha-168000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test_ha-168000_ha-168000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp testdata/cp-test.txt ha-168000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile2395328239/001/cp-test_ha-168000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m02:/home/docker/cp-test.txt ha-168000:/home/docker/cp-test_ha-168000-m02_ha-168000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test_ha-168000-m02_ha-168000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m02:/home/docker/cp-test.txt ha-168000-m03:/home/docker/cp-test_ha-168000-m02_ha-168000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test_ha-168000-m02_ha-168000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m02:/home/docker/cp-test.txt ha-168000-m04:/home/docker/cp-test_ha-168000-m02_ha-168000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test_ha-168000-m02_ha-168000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp testdata/cp-test.txt ha-168000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile2395328239/001/cp-test_ha-168000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m03:/home/docker/cp-test.txt ha-168000:/home/docker/cp-test_ha-168000-m03_ha-168000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test_ha-168000-m03_ha-168000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m03:/home/docker/cp-test.txt ha-168000-m02:/home/docker/cp-test_ha-168000-m03_ha-168000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test_ha-168000-m03_ha-168000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m03:/home/docker/cp-test.txt ha-168000-m04:/home/docker/cp-test_ha-168000-m03_ha-168000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test_ha-168000-m03_ha-168000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp testdata/cp-test.txt ha-168000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile2395328239/001/cp-test_ha-168000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m04:/home/docker/cp-test.txt ha-168000:/home/docker/cp-test_ha-168000-m04_ha-168000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000 "sudo cat /home/docker/cp-test_ha-168000-m04_ha-168000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m04:/home/docker/cp-test.txt ha-168000-m02:/home/docker/cp-test_ha-168000-m04_ha-168000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m02 "sudo cat /home/docker/cp-test_ha-168000-m04_ha-168000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 cp ha-168000-m04:/home/docker/cp-test.txt ha-168000-m03:/home/docker/cp-test_ha-168000-m04_ha-168000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 ssh -n ha-168000-m03 "sudo cat /home/docker/cp-test_ha-168000-m04_ha-168000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.20s)
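
The CopyFile subtest above walks every ordered pair of nodes: seed each node from `testdata/cp-test.txt`, copy to the host, then copy to every other node, verifying each hop with `ssh ... sudo cat`. A sketch of the pair enumeration; node names are taken from this log, but the minikube invocations are echoed rather than executed, so this is illustrative only:

```shell
# Enumerate the cp operations the CopyFile subtest performs across
# all ordered node pairs. Commands are echoed, not executed.
print_cp_matrix() {
  nodes="ha-168000 ha-168000-m02 ha-168000-m03 ha-168000-m04"
  for src in $nodes; do
    # Seed the source node from testdata.
    echo "minikube -p ha-168000 cp testdata/cp-test.txt $src:/home/docker/cp-test.txt"
    # Copy from the source node to every other node.
    for dst in $nodes; do
      [ "$src" = "$dst" ] && continue
      echo "minikube -p ha-168000 cp $src:/home/docker/cp-test.txt $dst:/home/docker/cp-test_${src}_${dst}.txt"
    done
  done
}

print_cp_matrix   # 16 operations: 4 seeds plus 4x3 node-to-node copies
```

This matches the pattern in the log: for a 4-node cluster the subtest issues 4 seed copies and 12 node-to-node copies, each followed by `ssh -n ... "sudo cat ..."` checks.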
+
TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-168000 node stop m02 -v=7 --alsologtostderr: (8.357474132s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr: exit status 7 (355.618712ms)

-- stdout --
	ha-168000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-168000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-168000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-168000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0728 18:27:09.541891    3710 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:27:09.542196    3710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:27:09.542201    3710 out.go:304] Setting ErrFile to fd 2...
	I0728 18:27:09.542205    3710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:27:09.542409    3710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:27:09.542595    3710 out.go:298] Setting JSON to false
	I0728 18:27:09.542620    3710 mustload.go:65] Loading cluster: ha-168000
	I0728 18:27:09.542656    3710 notify.go:220] Checking for updates...
	I0728 18:27:09.542950    3710 config.go:182] Loaded profile config "ha-168000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:27:09.542964    3710 status.go:255] checking status of ha-168000 ...
	I0728 18:27:09.543330    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.543377    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.552107    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51819
	I0728 18:27:09.552448    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.552865    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.552874    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.553060    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.553166    3710 main.go:141] libmachine: (ha-168000) Calling .GetState
	I0728 18:27:09.553251    3710 main.go:141] libmachine: (ha-168000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:27:09.553352    3710 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid from json: 3267
	I0728 18:27:09.554351    3710 status.go:330] ha-168000 host status = "Running" (err=<nil>)
	I0728 18:27:09.554375    3710 host.go:66] Checking if "ha-168000" exists ...
	I0728 18:27:09.554646    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.554672    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.564272    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51821
	I0728 18:27:09.564623    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.564969    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.564990    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.565205    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.565316    3710 main.go:141] libmachine: (ha-168000) Calling .GetIP
	I0728 18:27:09.565414    3710 host.go:66] Checking if "ha-168000" exists ...
	I0728 18:27:09.565664    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.565700    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.574878    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51823
	I0728 18:27:09.575214    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.575538    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.575558    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.575773    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.575877    3710 main.go:141] libmachine: (ha-168000) Calling .DriverName
	I0728 18:27:09.576026    3710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:27:09.576054    3710 main.go:141] libmachine: (ha-168000) Calling .GetSSHHostname
	I0728 18:27:09.576135    3710 main.go:141] libmachine: (ha-168000) Calling .GetSSHPort
	I0728 18:27:09.576203    3710 main.go:141] libmachine: (ha-168000) Calling .GetSSHKeyPath
	I0728 18:27:09.576276    3710 main.go:141] libmachine: (ha-168000) Calling .GetSSHUsername
	I0728 18:27:09.576359    3710 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000/id_rsa Username:docker}
	I0728 18:27:09.613651    3710 ssh_runner.go:195] Run: systemctl --version
	I0728 18:27:09.618062    3710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:27:09.628970    3710 kubeconfig.go:125] found "ha-168000" server: "https://192.169.0.254:8443"
	I0728 18:27:09.629002    3710 api_server.go:166] Checking apiserver status ...
	I0728 18:27:09.629042    3710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:27:09.640452    3710 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2083/cgroup
	W0728 18:27:09.648355    3710 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2083/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:27:09.648419    3710 ssh_runner.go:195] Run: ls
	I0728 18:27:09.651726    3710 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0728 18:27:09.655898    3710 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0728 18:27:09.655911    3710 status.go:422] ha-168000 apiserver status = Running (err=<nil>)
	I0728 18:27:09.655924    3710 status.go:257] ha-168000 status: &{Name:ha-168000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:27:09.655937    3710 status.go:255] checking status of ha-168000-m02 ...
	I0728 18:27:09.656221    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.656242    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.665067    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51827
	I0728 18:27:09.665449    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.665836    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.665853    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.666059    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.666165    3710 main.go:141] libmachine: (ha-168000-m02) Calling .GetState
	I0728 18:27:09.666250    3710 main.go:141] libmachine: (ha-168000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:27:09.666323    3710 main.go:141] libmachine: (ha-168000-m02) DBG | hyperkit pid from json: 3278
	I0728 18:27:09.667312    3710 main.go:141] libmachine: (ha-168000-m02) DBG | hyperkit pid 3278 missing from process table
	I0728 18:27:09.667335    3710 status.go:330] ha-168000-m02 host status = "Stopped" (err=<nil>)
	I0728 18:27:09.667342    3710 status.go:343] host is not running, skipping remaining checks
	I0728 18:27:09.667350    3710 status.go:257] ha-168000-m02 status: &{Name:ha-168000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:27:09.667361    3710 status.go:255] checking status of ha-168000-m03 ...
	I0728 18:27:09.667642    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.667670    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.676127    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51829
	I0728 18:27:09.676480    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.676823    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.676842    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.677059    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.677172    3710 main.go:141] libmachine: (ha-168000-m03) Calling .GetState
	I0728 18:27:09.677257    3710 main.go:141] libmachine: (ha-168000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:27:09.677337    3710 main.go:141] libmachine: (ha-168000-m03) DBG | hyperkit pid from json: 3296
	I0728 18:27:09.678309    3710 status.go:330] ha-168000-m03 host status = "Running" (err=<nil>)
	I0728 18:27:09.678318    3710 host.go:66] Checking if "ha-168000-m03" exists ...
	I0728 18:27:09.678574    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.678598    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.687161    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51831
	I0728 18:27:09.687540    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.687892    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.687906    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.688134    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.688272    3710 main.go:141] libmachine: (ha-168000-m03) Calling .GetIP
	I0728 18:27:09.688360    3710 host.go:66] Checking if "ha-168000-m03" exists ...
	I0728 18:27:09.688627    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.688652    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.697102    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51833
	I0728 18:27:09.697454    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.697800    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.697816    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.698031    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.698151    3710 main.go:141] libmachine: (ha-168000-m03) Calling .DriverName
	I0728 18:27:09.698270    3710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:27:09.698282    3710 main.go:141] libmachine: (ha-168000-m03) Calling .GetSSHHostname
	I0728 18:27:09.698363    3710 main.go:141] libmachine: (ha-168000-m03) Calling .GetSSHPort
	I0728 18:27:09.698436    3710 main.go:141] libmachine: (ha-168000-m03) Calling .GetSSHKeyPath
	I0728 18:27:09.698521    3710 main.go:141] libmachine: (ha-168000-m03) Calling .GetSSHUsername
	I0728 18:27:09.698600    3710 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000-m03/id_rsa Username:docker}
	I0728 18:27:09.731814    3710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:27:09.743037    3710 kubeconfig.go:125] found "ha-168000" server: "https://192.169.0.254:8443"
	I0728 18:27:09.743052    3710 api_server.go:166] Checking apiserver status ...
	I0728 18:27:09.743094    3710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:27:09.754536    3710 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2089/cgroup
	W0728 18:27:09.762079    3710 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2089/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:27:09.762132    3710 ssh_runner.go:195] Run: ls
	I0728 18:27:09.765382    3710 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0728 18:27:09.768632    3710 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0728 18:27:09.768643    3710 status.go:422] ha-168000-m03 apiserver status = Running (err=<nil>)
	I0728 18:27:09.768651    3710 status.go:257] ha-168000-m03 status: &{Name:ha-168000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:27:09.768661    3710 status.go:255] checking status of ha-168000-m04 ...
	I0728 18:27:09.768953    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.768980    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.777552    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51837
	I0728 18:27:09.777895    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.778233    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.778250    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.778466    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.778586    3710 main.go:141] libmachine: (ha-168000-m04) Calling .GetState
	I0728 18:27:09.778677    3710 main.go:141] libmachine: (ha-168000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:27:09.778763    3710 main.go:141] libmachine: (ha-168000-m04) DBG | hyperkit pid from json: 3387
	I0728 18:27:09.779741    3710 status.go:330] ha-168000-m04 host status = "Running" (err=<nil>)
	I0728 18:27:09.779753    3710 host.go:66] Checking if "ha-168000-m04" exists ...
	I0728 18:27:09.779996    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.780017    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.788489    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51839
	I0728 18:27:09.788822    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.789186    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.789210    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.789411    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.789526    3710 main.go:141] libmachine: (ha-168000-m04) Calling .GetIP
	I0728 18:27:09.789609    3710 host.go:66] Checking if "ha-168000-m04" exists ...
	I0728 18:27:09.789862    3710 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:27:09.789884    3710 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:27:09.798287    3710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51841
	I0728 18:27:09.798634    3710 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:27:09.798964    3710 main.go:141] libmachine: Using API Version  1
	I0728 18:27:09.798980    3710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:27:09.799173    3710 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:27:09.799289    3710 main.go:141] libmachine: (ha-168000-m04) Calling .DriverName
	I0728 18:27:09.799407    3710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:27:09.799418    3710 main.go:141] libmachine: (ha-168000-m04) Calling .GetSSHHostname
	I0728 18:27:09.799484    3710 main.go:141] libmachine: (ha-168000-m04) Calling .GetSSHPort
	I0728 18:27:09.799560    3710 main.go:141] libmachine: (ha-168000-m04) Calling .GetSSHKeyPath
	I0728 18:27:09.799673    3710 main.go:141] libmachine: (ha-168000-m04) Calling .GetSSHUsername
	I0728 18:27:09.799760    3710 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/ha-168000-m04/id_rsa Username:docker}
	I0728 18:27:09.832634    3710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:27:09.842747    3710 status.go:257] ha-168000-m04 status: &{Name:ha-168000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.28s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.28s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.78s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 node start m02 -v=7 --alsologtostderr
E0728 18:27:22.871620    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-168000 node start m02 -v=7 --alsologtostderr: (42.275292569s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.78s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.35s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (229.83s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-168000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-168000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-168000 -v=7 --alsologtostderr: (27.093076043s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-168000 --wait=true -v=7 --alsologtostderr
E0728 18:28:44.791664    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:28:53.054351    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:30:49.986776    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:31:00.903069    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 18:31:28.630293    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-168000 --wait=true -v=7 --alsologtostderr: (3m22.619693741s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-168000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (229.83s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-168000 node delete m03 -v=7 --alsologtostderr: (7.651216953s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

TestMultiControlPlane/serial/StopCluster (24.99s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-168000 stop -v=7 --alsologtostderr: (24.895415078s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-168000 status -v=7 --alsologtostderr: exit status 7 (89.263962ms)

-- stdout --
	ha-168000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-168000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-168000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:32:16.378582    4140 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:32:16.378855    4140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:16.378860    4140 out.go:304] Setting ErrFile to fd 2...
	I0728 18:32:16.378864    4140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:16.379062    4140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:32:16.379244    4140 out.go:298] Setting JSON to false
	I0728 18:32:16.379266    4140 mustload.go:65] Loading cluster: ha-168000
	I0728 18:32:16.379303    4140 notify.go:220] Checking for updates...
	I0728 18:32:16.379578    4140 config.go:182] Loaded profile config "ha-168000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:16.379593    4140 status.go:255] checking status of ha-168000 ...
	I0728 18:32:16.379937    4140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.379998    4140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:32:16.388717    4140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52152
	I0728 18:32:16.389084    4140 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:32:16.389494    4140 main.go:141] libmachine: Using API Version  1
	I0728 18:32:16.389504    4140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:32:16.389727    4140 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:32:16.389858    4140 main.go:141] libmachine: (ha-168000) Calling .GetState
	I0728 18:32:16.389955    4140 main.go:141] libmachine: (ha-168000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:32:16.390081    4140 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid from json: 3788
	I0728 18:32:16.390935    4140 main.go:141] libmachine: (ha-168000) DBG | hyperkit pid 3788 missing from process table
	I0728 18:32:16.390968    4140 status.go:330] ha-168000 host status = "Stopped" (err=<nil>)
	I0728 18:32:16.390978    4140 status.go:343] host is not running, skipping remaining checks
	I0728 18:32:16.390985    4140 status.go:257] ha-168000 status: &{Name:ha-168000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:32:16.391008    4140 status.go:255] checking status of ha-168000-m02 ...
	I0728 18:32:16.391248    4140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.391269    4140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:32:16.399452    4140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52154
	I0728 18:32:16.399840    4140 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:32:16.400243    4140 main.go:141] libmachine: Using API Version  1
	I0728 18:32:16.400261    4140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:32:16.400482    4140 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:32:16.400597    4140 main.go:141] libmachine: (ha-168000-m02) Calling .GetState
	I0728 18:32:16.400689    4140 main.go:141] libmachine: (ha-168000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:32:16.400770    4140 main.go:141] libmachine: (ha-168000-m02) DBG | hyperkit pid from json: 3798
	I0728 18:32:16.401676    4140 status.go:330] ha-168000-m02 host status = "Stopped" (err=<nil>)
	I0728 18:32:16.401685    4140 status.go:343] host is not running, skipping remaining checks
	I0728 18:32:16.401692    4140 status.go:257] ha-168000-m02 status: &{Name:ha-168000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:32:16.401703    4140 status.go:255] checking status of ha-168000-m04 ...
	I0728 18:32:16.401688    4140 main.go:141] libmachine: (ha-168000-m02) DBG | hyperkit pid 3798 missing from process table
	I0728 18:32:16.401956    4140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:32:16.401976    4140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:32:16.410066    4140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52157
	I0728 18:32:16.410371    4140 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:32:16.410753    4140 main.go:141] libmachine: Using API Version  1
	I0728 18:32:16.410770    4140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:32:16.410953    4140 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:32:16.411064    4140 main.go:141] libmachine: (ha-168000-m04) Calling .GetState
	I0728 18:32:16.411143    4140 main.go:141] libmachine: (ha-168000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:32:16.411220    4140 main.go:141] libmachine: (ha-168000-m04) DBG | hyperkit pid from json: 4085
	I0728 18:32:16.412103    4140 main.go:141] libmachine: (ha-168000-m04) DBG | hyperkit pid 4085 missing from process table
	I0728 18:32:16.412127    4140 status.go:330] ha-168000-m04 host status = "Stopped" (err=<nil>)
	I0728 18:32:16.412132    4140 status.go:343] host is not running, skipping remaining checks
	I0728 18:32:16.412138    4140 status.go:257] ha-168000-m04 status: &{Name:ha-168000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.99s)

TestImageBuild/serial/Setup (38.75s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-783000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-783000 --driver=hyperkit : (38.748137035s)
--- PASS: TestImageBuild/serial/Setup (38.75s)

TestImageBuild/serial/NormalBuild (1.57s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-783000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-783000: (1.573728662s)
--- PASS: TestImageBuild/serial/NormalBuild (1.57s)

TestImageBuild/serial/BuildWithBuildArg (0.7s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-783000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.70s)

TestImageBuild/serial/BuildWithDockerIgnore (0.64s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-783000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.64s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.6s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-783000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.60s)

TestJSONOutput/start/Command (51.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-028000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-028000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (51.700482962s)
--- PASS: TestJSONOutput/start/Command (51.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-028000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-028000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-028000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-028000 --output=json --user=testUser: (8.336304038s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-327000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-327000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.547694ms)

-- stdout --
	{"specversion":"1.0","id":"78970a1b-dbb6-4dd1-bced-89b7eee253de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-327000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"77fda921-fc80-4575-b153-69619d086952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"4d990eab-542f-4a90-a214-709644b519b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig"}}
	{"specversion":"1.0","id":"33acf9c2-c850-493d-807d-9580405ffb3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5f29075f-e105-4978-b0ce-844649a02896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2b29be0b-ad86-49d7-b8b9-e0bef1fb331d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube"}}
	{"specversion":"1.0","id":"1a2a8b70-e2f6-455a-90e8-ab2253383e6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c15f6e51-1e2f-45bf-a828-750c359774c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-327000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-327000
--- PASS: TestErrorJSONOutput (0.57s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (92.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-332000 --driver=hyperkit 
E0728 18:35:49.980910    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:36:00.897807    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-332000 --driver=hyperkit : (41.560452573s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-335000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-335000 --driver=hyperkit : (41.697452409s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-332000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-335000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-335000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-335000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-335000: (3.428005151s)
helpers_test.go:175: Cleaning up "first-332000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-332000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-332000: (5.248620804s)
--- PASS: TestMinikubeProfile (92.70s)

TestMultiNode/serial/FreshStart2Nodes (113.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-362000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0728 18:40:49.975579    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:41:00.892088    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-362000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m53.386442688s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.62s)

TestMultiNode/serial/DeployApp2Nodes (4.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-362000 -- rollout status deployment/busybox: (2.647007858s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-8hq8g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-svnlx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-8hq8g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-svnlx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-8hq8g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-svnlx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.24s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-8hq8g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-8hq8g -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-svnlx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-362000 -- exec busybox-fc5497c4f-svnlx -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-362000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/StopNode (8.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 node stop m03: (8.342075954s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-362000 status: exit status 7 (243.903981ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-362000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-362000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr: exit status 7 (238.435845ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-362000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-362000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:42:52.660202    4620 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:42:52.660482    4620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:42:52.660488    4620 out.go:304] Setting ErrFile to fd 2...
	I0728 18:42:52.660491    4620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:42:52.660685    4620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:42:52.660863    4620 out.go:298] Setting JSON to false
	I0728 18:42:52.660886    4620 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:42:52.660931    4620 notify.go:220] Checking for updates...
	I0728 18:42:52.661253    4620 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:42:52.661269    4620 status.go:255] checking status of multinode-362000 ...
	I0728 18:42:52.661688    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.661732    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.670571    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52756
	I0728 18:42:52.670910    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.671290    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.671300    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.671489    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.671607    4620 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:42:52.671684    4620 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:42:52.671759    4620 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4468
	I0728 18:42:52.672691    4620 status.go:330] multinode-362000 host status = "Running" (err=<nil>)
	I0728 18:42:52.672709    4620 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:42:52.672956    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.672977    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.681254    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52758
	I0728 18:42:52.681600    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.681921    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.681938    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.682181    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.682308    4620 main.go:141] libmachine: (multinode-362000) Calling .GetIP
	I0728 18:42:52.682397    4620 host.go:66] Checking if "multinode-362000" exists ...
	I0728 18:42:52.682651    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.682675    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.690905    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52760
	I0728 18:42:52.691214    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.691542    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.691552    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.691749    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.691872    4620 main.go:141] libmachine: (multinode-362000) Calling .DriverName
	I0728 18:42:52.692016    4620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:42:52.692038    4620 main.go:141] libmachine: (multinode-362000) Calling .GetSSHHostname
	I0728 18:42:52.692114    4620 main.go:141] libmachine: (multinode-362000) Calling .GetSSHPort
	I0728 18:42:52.692193    4620 main.go:141] libmachine: (multinode-362000) Calling .GetSSHKeyPath
	I0728 18:42:52.692272    4620 main.go:141] libmachine: (multinode-362000) Calling .GetSSHUsername
	I0728 18:42:52.692353    4620 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000/id_rsa Username:docker}
	I0728 18:42:52.721207    4620 ssh_runner.go:195] Run: systemctl --version
	I0728 18:42:52.725672    4620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:42:52.737335    4620 kubeconfig.go:125] found "multinode-362000" server: "https://192.169.0.13:8443"
	I0728 18:42:52.737360    4620 api_server.go:166] Checking apiserver status ...
	I0728 18:42:52.737396    4620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:42:52.749418    4620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup
	W0728 18:42:52.757599    4620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:42:52.757638    4620 ssh_runner.go:195] Run: ls
	I0728 18:42:52.760807    4620 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0728 18:42:52.763893    4620 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0728 18:42:52.763905    4620 status.go:422] multinode-362000 apiserver status = Running (err=<nil>)
	I0728 18:42:52.763914    4620 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:42:52.763926    4620 status.go:255] checking status of multinode-362000-m02 ...
	I0728 18:42:52.764166    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.764187    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.772667    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52764
	I0728 18:42:52.773018    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.773360    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.773373    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.773568    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.773677    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:42:52.773765    4620 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:42:52.773837    4620 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4486
	I0728 18:42:52.774771    4620 status.go:330] multinode-362000-m02 host status = "Running" (err=<nil>)
	I0728 18:42:52.774781    4620 host.go:66] Checking if "multinode-362000-m02" exists ...
	I0728 18:42:52.775045    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.775072    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.783444    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52766
	I0728 18:42:52.783838    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.784243    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.784259    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.784508    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.784612    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .GetIP
	I0728 18:42:52.784702    4620 host.go:66] Checking if "multinode-362000-m02" exists ...
	I0728 18:42:52.784966    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.784987    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.793269    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52768
	I0728 18:42:52.793614    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.793933    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.793946    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.794155    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.794277    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .DriverName
	I0728 18:42:52.794403    4620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:42:52.794415    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHHostname
	I0728 18:42:52.794490    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHPort
	I0728 18:42:52.794564    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHKeyPath
	I0728 18:42:52.794652    4620 main.go:141] libmachine: (multinode-362000-m02) Calling .GetSSHUsername
	I0728 18:42:52.794736    4620 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1006/.minikube/machines/multinode-362000-m02/id_rsa Username:docker}
	I0728 18:42:52.823131    4620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:42:52.833261    4620 status.go:257] multinode-362000-m02 status: &{Name:multinode-362000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:42:52.833276    4620 status.go:255] checking status of multinode-362000-m03 ...
	I0728 18:42:52.833545    4620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:42:52.833567    4620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:42:52.842249    4620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52771
	I0728 18:42:52.842605    4620 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:42:52.842908    4620 main.go:141] libmachine: Using API Version  1
	I0728 18:42:52.842918    4620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:42:52.843136    4620 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:42:52.843252    4620 main.go:141] libmachine: (multinode-362000-m03) Calling .GetState
	I0728 18:42:52.843336    4620 main.go:141] libmachine: (multinode-362000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:42:52.843408    4620 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid from json: 4551
	I0728 18:42:52.844330    4620 main.go:141] libmachine: (multinode-362000-m03) DBG | hyperkit pid 4551 missing from process table
	I0728 18:42:52.844364    4620 status.go:330] multinode-362000-m03 host status = "Stopped" (err=<nil>)
	I0728 18:42:52.844372    4620 status.go:343] host is not running, skipping remaining checks
	I0728 18:42:52.844379    4620 status.go:257] multinode-362000-m03 status: &{Name:multinode-362000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (8.83s)

TestMultiNode/serial/StartAfterStop (146.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 node start m03 -v=7 --alsologtostderr: (2m26.300149875s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (146.65s)

TestMultiNode/serial/DeleteNode (6.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 node delete m03: (5.764593475s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.09s)

TestMultiNode/serial/StopMultiNode (16.77s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-362000 stop: (16.61197332s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-362000 status: exit status 7 (78.910216ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-362000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr: exit status 7 (79.213468ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-362000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:49:22.863045    4766 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:49:22.863224    4766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:49:22.863230    4766 out.go:304] Setting ErrFile to fd 2...
	I0728 18:49:22.863234    4766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:49:22.863406    4766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1006/.minikube/bin
	I0728 18:49:22.863584    4766 out.go:298] Setting JSON to false
	I0728 18:49:22.863611    4766 mustload.go:65] Loading cluster: multinode-362000
	I0728 18:49:22.863658    4766 notify.go:220] Checking for updates...
	I0728 18:49:22.863926    4766 config.go:182] Loaded profile config "multinode-362000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:49:22.863945    4766 status.go:255] checking status of multinode-362000 ...
	I0728 18:49:22.864294    4766 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:49:22.864342    4766 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:49:22.873157    4766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53009
	I0728 18:49:22.873502    4766 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:49:22.873929    4766 main.go:141] libmachine: Using API Version  1
	I0728 18:49:22.873938    4766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:49:22.874210    4766 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:49:22.874334    4766 main.go:141] libmachine: (multinode-362000) Calling .GetState
	I0728 18:49:22.874431    4766 main.go:141] libmachine: (multinode-362000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:49:22.874495    4766 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid from json: 4686
	I0728 18:49:22.875394    4766 main.go:141] libmachine: (multinode-362000) DBG | hyperkit pid 4686 missing from process table
	I0728 18:49:22.875419    4766 status.go:330] multinode-362000 host status = "Stopped" (err=<nil>)
	I0728 18:49:22.875426    4766 status.go:343] host is not running, skipping remaining checks
	I0728 18:49:22.875432    4766 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:49:22.875458    4766 status.go:255] checking status of multinode-362000-m02 ...
	I0728 18:49:22.875715    4766 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0728 18:49:22.875733    4766 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0728 18:49:22.883957    4766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53011
	I0728 18:49:22.884307    4766 main.go:141] libmachine: () Calling .GetVersion
	I0728 18:49:22.884692    4766 main.go:141] libmachine: Using API Version  1
	I0728 18:49:22.884710    4766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0728 18:49:22.884929    4766 main.go:141] libmachine: () Calling .GetMachineName
	I0728 18:49:22.885028    4766 main.go:141] libmachine: (multinode-362000-m02) Calling .GetState
	I0728 18:49:22.885129    4766 main.go:141] libmachine: (multinode-362000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0728 18:49:22.885189    4766 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid from json: 4695
	I0728 18:49:22.886047    4766 main.go:141] libmachine: (multinode-362000-m02) DBG | hyperkit pid 4695 missing from process table
	I0728 18:49:22.886068    4766 status.go:330] multinode-362000-m02 host status = "Stopped" (err=<nil>)
	I0728 18:49:22.886075    4766 status.go:343] host is not running, skipping remaining checks
	I0728 18:49:22.886081    4766 status.go:257] multinode-362000-m02 status: &{Name:multinode-362000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.77s)

TestMultiNode/serial/RestartMultiNode (101.76s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0728 18:50:50.053703    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 18:51:00.972307    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m41.430444747s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-362000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (101.76s)

TestMultiNode/serial/ValidateNameConflict (42.44s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-362000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-362000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-362000-m02 --driver=hyperkit : exit status 14 (415.631486ms)

-- stdout --
	* [multinode-362000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-362000-m02' is duplicated with machine name 'multinode-362000-m02' in profile 'multinode-362000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-362000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-362000-m03 --driver=hyperkit : (38.349929063s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-362000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-362000: exit status 80 (259.847827ms)

-- stdout --
	* Adding node m03 to cluster multinode-362000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-362000-m03 already exists in multinode-362000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-362000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-362000-m03: (3.360701514s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.44s)

TestSkaffold (112.8s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1028256218 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1028256218 version: (1.790927677s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-014000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-014000 --memory=2600 --driver=hyperkit : (36.178286782s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1028256218 run --minikube-profile skaffold-014000 --kube-context skaffold-014000 --status-check=true --port-forward=false --interactive=false
E0728 18:59:04.087428    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1028256218 run --minikube-profile skaffold-014000 --kube-context skaffold-014000 --status-check=true --port-forward=false --interactive=false: (56.788316883s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-f9f4b96c8-n4bqt" [978b9a46-234a-44b8-9a05-1725e8c7f1cb] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00353335s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-8545684d5c-t6b4g" [03f7d545-6647-4c44-bc94-65747549f4ef] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003651335s
helpers_test.go:175: Cleaning up "skaffold-014000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-014000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-014000: (5.253138722s)
--- PASS: TestSkaffold (112.80s)

TestRunningBinaryUpgrade (82.44s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3973890663 start -p running-upgrade-261000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3973890663 start -p running-upgrade-261000 --memory=2200 --vm-driver=hyperkit : (51.603170238s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-261000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-261000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (24.371527289s)
helpers_test.go:175: Cleaning up "running-upgrade-261000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-261000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-261000: (5.30773186s)
--- PASS: TestRunningBinaryUpgrade (82.44s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.63s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2082238220/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2082238220/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2082238220/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2082238220/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.63s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.39s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current210305625/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current210305625/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current210305625/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current210305625/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.39s)

TestStoppedBinaryUpgrade/Setup (1.28s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.28s)

TestStoppedBinaryUpgrade/Upgrade (697.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2279658417 start -p stopped-upgrade-624000 --memory=2200 --vm-driver=hyperkit 
E0728 19:29:37.113501    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:30:50.082859    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:31:00.997489    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 19:32:24.087176    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
E0728 19:34:37.109093    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
E0728 19:35:33.149170    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:35:50.077831    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/addons-967000/client.crt: no such file or directory
E0728 19:36:00.994044    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/functional-596000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2279658417 start -p stopped-upgrade-624000 --memory=2200 --vm-driver=hyperkit : (10m26.553191188s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2279658417 -p stopped-upgrade-624000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2279658417 -p stopped-upgrade-624000 stop: (8.225650792s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-624000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0728 19:37:40.160823    1533 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1006/.minikube/profiles/skaffold-014000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-624000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m2.806496891s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (697.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-624000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-624000: (2.711413976s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.45s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (445.360897ms)

-- stdout --
	* [NoKubernetes-991000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.45s)

TestNoKubernetes/serial/StartWithK8s (70.68s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit : (1m10.50051179s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-991000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.68s)

TestNoKubernetes/serial/StartWithStopK8s (8.66s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit : (6.098618616s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-991000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-991000 status -o json: exit status 2 (154.723892ms)

-- stdout --
	{"Name":"NoKubernetes-991000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-991000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-991000: (2.40490143s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.66s)

TestNoKubernetes/serial/Start (22.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit : (22.105158293s)
--- PASS: TestNoKubernetes/serial/Start (22.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (127.326424ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.49s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.49s)

TestNoKubernetes/serial/Stop (2.42s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-991000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-991000: (2.423434566s)
--- PASS: TestNoKubernetes/serial/Stop (2.42s)

TestNoKubernetes/serial/StartNoArgs (19.39s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit : (19.386954118s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (128.071868ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)


Test skip (20/227)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
